WireGuard® is an extremely simple yet fast and modern VPN that utilizes state-of-the-art cryptography. It aims to be faster, simpler, leaner, and more useful than IPsec, while avoiding the massive headache. It intends to be considerably more performant than OpenVPN. WireGuard is designed as a general purpose VPN for running on embedded interfaces and supercomputers alike, fit for many different circumstances. Initially released for the Linux kernel, it is now cross-platform (Windows, macOS, BSD, iOS, Android) and widely deployable. It is currently under heavy development, but already it might be regarded as the most secure, easiest to use, and simplest VPN solution in the industry.
Supported Architectures
We utilise the docker manifest for multi-platform awareness. More information is available from docker here and our announcement here.
Simply pulling lscr.io/linuxserver/wireguard:latest should retrieve the correct image for your arch, but you can also pull specific arch images via tags.
The architectures supported by this image are:
Architecture
Available
Tag
x86-64
✅
amd64-<version tag>
arm64
✅
arm64v8-<version tag>
armhf
✅
arm32v7-<version tag>
Version Tags
This image provides various versions that are available via tags. Please read the descriptions carefully and exercise caution when using unstable or development tags.
Tag
Available
Description
latest
✅
Stable releases with support for compiling WireGuard modules
alpine
✅
Stable releases based on Alpine, without support for compiling WireGuard modules
Application Setup
On start, the container first checks whether the wireguard kernel module is already installed and loaded. Kernels newer than 5.6 generally have the wireguard module built in (as do some older custom kernels), but the module may not be enabled. Make sure it is enabled prior to starting the container.
If the module is neither built in nor installed on the host, the container checks whether the kernel headers are present (in /usr/src). If not, it attempts to download the necessary kernel headers from the Ubuntu xenial/bionic or Debian/Raspbian buster repos, then attempts to compile and install the kernel module. If the kernel headers are found neither in /usr/src nor in those repos, the container will sleep indefinitely, as wireguard cannot be installed.
If you're on a Debian/Ubuntu based host with a custom or downstream distro-provided kernel (e.g. Pop!_OS), the container won't be able to install the kernel headers from the regular Ubuntu and Debian repos. In those cases, you can try installing the headers on the host via sudo apt install linux-headers-$(uname -r) (if a distro version) and then add a volume mapping for /usr/src:/usr/src; if the kernel is custom built, map the location of the existing headers instead so the container can use the host-installed headers to build the kernel module (tested successfully on Pop!_OS, ymmv).
For arm32/64 devices, Raspberry Pi 2-4 running the official Ubuntu images or Raspbian Buster are supported out of the box. For all other devices and OSes, you can try installing the kernel headers on the host and mapping /usr/src:/usr/src; it may just work (no guarantees).
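As a sketch of the host-installed-headers route (the apt package name applies to distro-provided kernels; the docker invocation is illustrative and omits the other parameters described below):

```shell
# Install headers matching the running kernel (distro-provided kernels only)
sudo apt install linux-headers-$(uname -r)

# Expose the headers (and host modules) to the container so it can compile
# the module itself; combine these mappings with your usual run parameters
docker run -d \
  --name=wireguard \
  --cap-add=NET_ADMIN \
  --cap-add=SYS_MODULE \
  -v /usr/src:/usr/src \
  -v /lib/modules:/lib/modules \
  lscr.io/linuxserver/wireguard:latest
```

For a custom-built kernel, map the directory containing your existing headers in place of /usr/src.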
This can be run as a server or a client, based on the parameters used.
Server Mode
If the environment variable PEERS is set to a number or to a comma-separated list of names, the container will run in server mode and generate the necessary server and peer/client confs. The peer/client config QR codes are output in the docker log. They are also saved in text and png format under /config/peerX when PEERS is an integer, or /config/peer_X when a list of names was provided instead.
SERVERURL, SERVERPORT, INTERNAL_SUBNET and PEERDNS are optional variables used in server mode. Any change to these environment variables will trigger regeneration of the server and peer confs. Peer/client confs will be recreated with the existing private/public keys; delete the peer folders if you want the keys to be recreated along with the confs.
To add more peers/clients later on, increment the PEERS environment variable or add more names to the list, then recreate the container.
To display the QR codes of active peers again, you can use the following command and list the peer numbers as arguments: docker exec -it wireguard /app/show-peer 1 4 5 or docker exec -it wireguard /app/show-peer myPC myPhone myTablet (Keep in mind that the QR codes are also stored as PNGs in the config folder).
The templates used for server and peer confs are saved under /config/templates. Advanced users can modify these templates and force conf generation by deleting /config/wg0.conf and restarting the container.
Client Mode
Do not set the PEERS environment variable. Drop your client conf into the config folder as /config/wg0.conf and start the container.
If you get IPv6 related errors in the log and connection cannot be established, edit the AllowedIPs line in your peer/client wg0.conf to include only 0.0.0.0/0 and not ::/0; and restart the container.
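For reference, a client conf dropped in as /config/wg0.conf has this general shape (all values here are placeholders; use the conf issued by your server, and note AllowedIPs restricted to IPv4 as described above):

```ini
[Interface]
Address = 10.13.13.2
PrivateKey = <client private key>
DNS = 10.13.13.1

[Peer]
PublicKey = <server public key>
Endpoint = vpn.example.com:51820
AllowedIPs = 0.0.0.0/0
```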
Road warriors, roaming and returning home
If you plan to use WireGuard both remotely and locally, say on your mobile phone, you will need to consider routing. Most firewalls will not, out of the box, correctly route ports forwarded on your WAN interface back to the LAN. This means that when you return home, even though you can see the WireGuard server, the return packets will probably get lost.
This is not a WireGuard-specific issue; the two generally accepted solutions are NAT reflection (setting up your edge router/firewall so that it translates internal packets correctly) or split-horizon DNS (setting your internal DNS to return the private rather than the public IP when connecting locally).
Both of these approaches have positives and negatives however their setup is out of scope for this document as everyone's network layout and equipment will be different.
Maintaining local access to attached services
Note: This is not a configuration supported by LinuxServer.io; use at your own risk.
When routing via WireGuard from another container using the service option in docker, you might lose local access to that container's web UI. To avoid this, exclude the docker subnet from being routed via WireGuard by modifying your wg0.conf like so (adjusting the subnets as you require):
[Interface]
PrivateKey = <private key>
Address = 9.8.7.6/32
DNS = 8.8.8.8
PostUp = DROUTE=$(ip route | grep default | awk '{print $3}'); HOMENET=192.168.0.0/16; HOMENET2=10.0.0.0/8; HOMENET3=172.16.0.0/12; ip route add $HOMENET3 via $DROUTE; ip route add $HOMENET2 via $DROUTE; ip route add $HOMENET via $DROUTE; iptables -I OUTPUT -d $HOMENET -j ACCEPT; iptables -A OUTPUT -d $HOMENET2 -j ACCEPT; iptables -A OUTPUT -d $HOMENET3 -j ACCEPT; iptables -A OUTPUT ! -o %i -m mark ! --mark $(wg show %i fwmark) -m addrtype ! --dst-type LOCAL -j REJECT
PreDown = DROUTE=$(ip route | grep default | awk '{print $3}'); HOMENET=192.168.0.0/16; HOMENET2=10.0.0.0/8; HOMENET3=172.16.0.0/12; ip route del $HOMENET3 via $DROUTE; ip route del $HOMENET2 via $DROUTE; ip route del $HOMENET via $DROUTE; iptables -D OUTPUT ! -o %i -m mark ! --mark $(wg show %i fwmark) -m addrtype ! --dst-type LOCAL -j REJECT; iptables -D OUTPUT -d $HOMENET -j ACCEPT; iptables -D OUTPUT -d $HOMENET2 -j ACCEPT; iptables -D OUTPUT -d $HOMENET3 -j ACCEPT
Site-to-site VPN
Note: This is not a configuration supported by LinuxServer.io; use at your own risk.
Site-to-site VPN in server mode requires customizing the AllowedIPs statement for a specific peer in wg0.conf. Since wg0.conf is autogenerated when server vars are changed, it is not recommended to edit it manually.
In order to customize the AllowedIPs statement for a specific peer in wg0.conf, you can set an env var SERVER_ALLOWEDIPS_PEER_<peer name or number> to the additional subnets you'd like to add, comma separated and excluding the peer IP (e.g. "192.168.1.0/24,192.168.2.0/24"). Replace <peer name or number> with either the name or the number of a peer (whichever is used in the PEERS var).
For instance SERVER_ALLOWEDIPS_PEER_laptop="192.168.1.0/24,192.168.2.0/24" will result in the wg0.conf entry AllowedIPs = 10.13.13.2,192.168.1.0/24,192.168.2.0/24 for the peer named laptop.
Keep in mind that this var will only be considered when the confs are regenerated. Adding this var for an existing peer won't force a regeneration. You can delete wg0.conf and restart the container to force regeneration if necessary.
Don't forget to set the necessary PostUp and PostDown rules in your client's peer conf for LAN access.
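Using the laptop example above, the variable can be supplied alongside PEERS in a compose file like this (peer names and subnets are illustrative):

```yaml
environment:
  - PEERS=laptop,phone
  - SERVER_ALLOWEDIPS_PEER_laptop=192.168.1.0/24,192.168.2.0/24
```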
Usage
To help you get started creating a container from this image you can either use docker-compose or the docker cli.
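A representative compose file for server mode is shown below; adjust paths, IDs and variables to your setup, and drop SYS_MODULE and the /lib/modules mapping if your kernel already has the wireguard module built in:

```yaml
version: "2.1"
services:
  wireguard:
    image: lscr.io/linuxserver/wireguard:latest
    container_name: wireguard
    cap_add:
      - NET_ADMIN
      - SYS_MODULE
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/London
      - SERVERURL=wireguard.domain.com  # optional
      - SERVERPORT=51820                # optional
      - PEERS=1                         # optional
      - PEERDNS=auto                    # optional
      - INTERNAL_SUBNET=10.13.13.0      # optional
      - ALLOWEDIPS=0.0.0.0/0            # optional
    volumes:
      - /path/to/appdata/config:/config
      - /lib/modules:/lib/modules
    ports:
      - 51820:51820/udp
    sysctls:
      - net.ipv4.conf.all.src_valid_mark=1
    restart: unless-stopped
```

The docker cli equivalent uses the same parameters described in the tables below.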
Parameters
Docker images are configured using parameters passed at runtime (such as those above). These parameters are separated by a colon and indicate <external>:<internal> respectively. For example, -p 8080:80 would expose port 80 from inside the container to be accessible from the host's IP on port 8080 outside the container.
Ports (-p)
Parameter
Function
51820/udp
wireguard port
Environment Variables (-e)
Env
Function
PUID=1000
for UserID - see below for explanation
PGID=1000
for GroupID - see below for explanation
TZ=Europe/London
Specify a timezone to use, e.g. Europe/London
SERVERURL=wireguard.domain.com
External IP or domain name for docker host. Used in server mode. If set to auto, the container will try to determine and set the external IP automatically
SERVERPORT=51820
External port for docker host. Used in server mode.
PEERS=1
Number of peers to create confs for. Required for server mode. Can also be a list of names: myPC,myPhone,myTablet (alphanumeric only)
PEERDNS=auto
DNS server set in peer/client configs (can be set as 8.8.8.8). Used in server mode. Defaults to auto, which uses wireguard docker host's DNS via included CoreDNS forward.
INTERNAL_SUBNET=10.13.13.0
Internal subnet for the wireguard server and peers (only change if it clashes). Used in server mode.
ALLOWEDIPS=0.0.0.0/0
The IPs/ranges that the peers will be able to reach using the VPN connection. If not specified, the default value is '0.0.0.0/0, ::0/0', which routes ALL traffic through the VPN. If you want split tunneling, set this to only the IPs/ranges you would like to route through the tunnel, plus the server's WireGuard IP, such as 10.13.13.1.
LOG_CONFS=true
Generated QR codes will be displayed in the docker log. Set to false to skip log output.
Volume Mappings (-v)
Volume
Function
/config
Contains all relevant configuration files.
/lib/modules
Maps host's modules folder. Only required if compiling wireguard modules.
Miscellaneous Options
Parameter
Function
--sysctl=
Required for client mode.
Portainer notice
{% hint style="warning" %} This image utilises cap_add or sysctl to work properly. This is not implemented properly in some versions of Portainer, thus this image may not work if deployed through Portainer. {% endhint %}
Environment variables from files (Docker secrets)
You can set any environment variable from a file by using the special prefix FILE__.
As an example:
-e FILE__PASSWORD=/run/secrets/mysecretpassword
Will set the environment variable PASSWORD based on the contents of the /run/secrets/mysecretpassword file.
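A minimal sketch of the mechanism (the file name and value are illustrative; in production the file would typically be a Docker secret mounted under /run/secrets):

```shell
# Write the secret value to a file
printf '%s' 'supersecret' > mysecretpassword

# Passing -e FILE__PASSWORD=/path/to/mysecretpassword makes the container's
# init read the file's contents into PASSWORD at startup; the equivalent of:
PASSWORD="$(cat mysecretpassword)"
echo "$PASSWORD"   # prints: supersecret
```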
Umask for running applications
For all of our images we provide the ability to override the default umask setting for services started within the containers using the optional -e UMASK=022 setting. Keep in mind umask is not chmod: it subtracts from permissions based on its value rather than adding. Please read up here before asking for support.
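For example, a umask of 022 removes write permission for group and other from newly created files, which start from mode 0666 (so 0666 & ~022 = 644). On a Linux host you can observe this directly:

```shell
# Set the umask, create a file under it, and inspect the resulting mode
umask 022
f=$(mktemp -u)     # generate a temp path without creating the file
touch "$f"         # created as 0666 & ~022 = 644
stat -c '%a' "$f"  # prints: 644
rm "$f"
```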
User / Group Identifiers
When using volumes (-v flags), permissions issues can arise between the host OS and the container. We avoid this issue by allowing you to specify the user PUID and group PGID.
Ensure any volume directories on the host are owned by the same user you specify and any permissions issues will vanish like magic.
In this instance PUID=1000 and PGID=1000. To find yours, use id username as below:
$ id username
uid=1000(dockeruser) gid=1000(dockergroup) groups=1000(dockergroup)
Docker Mods
We publish various Docker Mods to enable additional functionality within the containers. The list of Mods available for this image (if any) as well as universal mods that can be applied to any one of our images can be accessed via the dynamic badges above.
Support Info
Shell access whilst the container is running:
docker exec -it wireguard /bin/bash
To monitor the logs of the container in realtime:
docker logs -f wireguard
Container version number
docker inspect -f '{{ index .Config.Labels "build_version" }}' wireguard
Image version number
docker inspect -f '{{ index .Config.Labels "build_version" }}' lscr.io/linuxserver/wireguard:latest
Versions
12.10.22: - Add Alpine branch. Optimize wg and coredns services.
09.10.22: - Switch back to iptables-legacy due to issues on some hosts.
04.10.22: - Rebase to Jammy. Upgrade to s6v3.
16.05.22: - Improve NAT handling in server mode when multiple ethernet devices are present.
23.04.22: - Add pre-shared key support. Automatically added to all new peer confs generated, existing ones are left without to ensure no breaking changes.
10.04.22: - Rebase to Ubuntu Focal. Add LOG_CONFS env var. Remove deprecated add-peer command.
28.10.21: - Add site-to-site vpn support.
11.02.21: - Fix bug related to changing internal subnet and named peer confs not updating.
06.10.20: - Disable CoreDNS in client mode, or if port 53 is already in use in server mode.
04.10.20: - Allow to specify a list of names as PEERS and add ALLOWEDIPS environment variable. Also, add peer name/id to each one of the peer sections in wg0.conf. Important: Existing users need to delete /config/templates/peer.conf and restart
27.09.20: - Cleaning service binding example to have accurate PreDown script.
06.08.20: - Replace resolvconf with openresolv due to dns issues when a client based on this image is connected to a server also based on this image. Add IPv6 info to readme. Display kernel version in logs.
29.07.20: - Update Coredns config to detect dns loops (existing users need to delete /config/coredns/Corefile and restart).
27.07.20: - Update Coredns config to prevent issues with non-user-defined bridge networks (existing users need to delete /config/coredns/Corefile and restart).
05.07.20: - Add Debian updates and security repos for headers.
25.06.20: - Simplify module tests, prevent iptables issues from resulting in false negatives.
19.06.20: - Add support for Ubuntu Focal (20.04) kernels. Compile wireguard tools and kernel module instead of using the ubuntu packages. Make module install optional. Improve verbosity in logs.
29.05.20: - Add support for 64bit raspbian.
28.04.20: - Add Buster/Stretch backports repos for Debian. Tested with OMV 5 and OMV 4 (on kernel 4.19.0-0.bpo.8-amd64).
20.04.20: - Fix typo in client mode conf existence check.
13.04.20: - Fix bug that forced conf recreation on every start.
08.04.20: - Add arm32/64 builds and enable multi-arch (rpi4 with ubuntu and raspbian buster tested). Add CoreDNS for PEERDNS=auto setting. Update the add-peer/show-peer scripts to utilize the templates and the INTERNAL_SUBNET var (previously missed, oops).
05.04.20: - Add INTERNAL_SUBNET variable to prevent subnet clashes. Add templates for server and peer confs.
01.04.20: - Add show-peer script and include info on host installed headers.
31.03.20: - Initial Release.
\ No newline at end of file
diff --git a/search/search_index.json b/search/search_index.json
index c357ba9a8c..bab4f36c54 100644
--- a/search/search_index.json
+++ b/search/search_index.json
@@ -1 +1 @@
-{"config":{"indexing":"full","lang":["en"],"min_search_length":3,"prebuild_index":false,"separator":"[\\s\\-]+"},"docs":[{"location":"","text":"Introduction Welcome to the home of the LinuxServer.io documentation! It is our goal to ensure that all of our images are properly documented with all of the relevant information in order to help our users get started. As well as individual set up guides for each of our images, there is also general information pertaining to the running of Docker containers, and best practices. Want to get hold of the team? The team resides primarily in our Discord server. We also have a forum if chat isn't your thing. Where Link Discord https://discord.gg/YWrKVTn Forum https://discourse.linuxserver.io For those interested in our CI environment via Jenkins: https://ci.linuxserver.io/","title":"Introduction"},{"location":"#introduction","text":"Welcome to the home of the LinuxServer.io documentation! It is our goal to ensure that all of our images are properly documented with all of the relevant information in order to help our users get started. As well as individual set up guides for each of our images, there is also general information pertaining to the running of Docker containers, and best practices.","title":"Introduction"},{"location":"#want-to-get-hold-of-the-team","text":"The team resides primarily in our Discord server. We also have a forum if chat isn't your thing. Where Link Discord https://discord.gg/YWrKVTn Forum https://discourse.linuxserver.io For those interested in our CI environment via Jenkins: https://ci.linuxserver.io/","title":"Want to get hold of the team?"},{"location":"FAQ/","text":"FAQ Here will some Frequently Asked Questions reside My host is incompatible with images based on Ubuntu Jammy {#jammy} Some x86_64 hosts running older versions of the Docker engine are not compatible with some images based on Ubuntu Jammy. 
Symptoms If your host is affected you may see errors in your containers such as: ERROR - Unable to determine java version; make sure Java is installed and callable Or Failed to create CoreCLR, HRESULT: 0x80070008 Or WARNING :: MAIN : webStart.py:initialize:249 : can't start new thread Resolution Option 1 (Long-Term Fix) Upgrade your Docker engine install to at least version 20.10.10 . Refer to the official Docker docs for installation/update details. Option 2 (Short-Term Fix) For Docker CLI, run your container with: --security-opt seccomp=unconfined For Docker Compose, run your container with: security_opt: - seccomp=unconfined My host is incompatible with images based on rdesktop {#rdesktop} Some x86_64 hosts have issues running rdesktop based images even with the latest docker version due to syscalls that are unknown to docker. Symptoms If your host is affected you may see errors in your containers such as: Failed to close file descriptor for child process (Operation not permitted) Resolution For Docker CLI, run your container with: --security-opt seccomp=unconfined For Docker Compose, run your container with: security_opt: - seccomp=unconfined My host is incompatible with images based on Ubuntu Focal and Alpine 3.13 and later {#libseccomp} This only affects 32 bit installs of distros based on Debian Buster. This is due to a bug in the libseccomp2 library (dependency of Docker itself), which is fixed. However it's not pushed to all the repositories. A GitHub issue tracking this You have a few options as noted below. Options 1 is short-term, while option 2 is considered the best option if you don't plan to reinstall the device (option 3). Resolution If you decide to do option 1 or 2, you should just need to restart the container after confirming you have libseccomp2.4.4 installed. If 1 or 2 did not work, ensure your Docker install is at least version 20.10.0, refer to the official Docker docs for installation. 
Option 1 Manually install an updated version of the library with dpkg. wget http://ftp.us.debian.org/debian/pool/main/libs/libseccomp/libseccomp2_2.4.4-1~bpo10+1_armhf.deb sudo dpkg -i libseccomp2_2.4.4-1~bpo10+1_armhf.deb {% hint style=\"info\" %} This url may have been updated. Find the latest by browsing here . {% endhint %} Option 2 Add the backports repo for DebianBuster. As seen here . sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 04EE7237B7D453EC 648ACFD622F3D138 echo \"deb http://deb.debian.org/debian buster-backports main\" | sudo tee -a /etc/apt/sources.list.d/buster-backports.list sudo apt update sudo apt install -t buster-backports libseccomp2 Option 3 Reinstall/update your OS to a version that still gets updates. Any distro based on DebianStretch does not seem to have this package available DebianBuster based distros can get the package trough backports, as outlined in point 2. {% hint style=\"info\" %} RaspberryPI OS (formerly Raspbian) Can be upgraded to run with a 64bit kernel {% endhint %} Symptoms 502 errors in Jellyfin as seen in linuxserver/docker-jellyfin#71 Error starting framework core messages in the docker log for Plex . linuxserver/docker-plex#247 No WebUI for Radarr , even though the container is running. linuxserver/docker-radarr#118 Images based on our Nginx base-image(Nextcloud, SWAG, Nginx, etc.) fails to generate a certificate, with a message similar to error getting time:crypto/asn1/a_time.c:330 docker exec date returns 1970 I want to reverse proxy a application which defaults to https with a selfsigned certificate {#strict-proxy} Traefik {#strict-proxy-traefik} In this example we will configure a serverTransport rule we can apply to a service, as well as telling Traefik to use https on the backend for the service. Create a ServerTransport in your dynamic Traefik configuration, we are calling ours ignorecert . 
```yaml
http:
  serversTransports:
    ignorecert:
      insecureSkipVerify: true
```

Then on our `foo` service we tell it to use this rule, as well as telling Traefik the backend is running on https.

```yaml
- traefik.http.services.foo.loadbalancer.serverstransport=ignorecert
- traefik.http.services.foo.loadbalancer.server.scheme=https
```
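For context, here is a hypothetical compose service for `foo` that applies both labels together. The image name and the router rule are placeholder assumptions for illustration, not part of the original example.

```yaml
services:
  foo:
    # Hypothetical image; substitute your own https-only service.
    image: lscr.io/linuxserver/foo:latest
    labels:
      - traefik.enable=true
      # Placeholder router rule; adjust the hostname for your setup.
      - traefik.http.routers.foo.rule=Host(`foo.example.com`)
      # The two lines from the example above.
      - traefik.http.services.foo.loadbalancer.serverstransport=ignorecert
      - traefik.http.services.foo.loadbalancer.server.scheme=https
```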
# Awesome LSIO

## Administration

- **doublecommander**: Double Commander is a free cross-platform open source file manager with two panels side by side. It is inspired by Total Commander and features some new ideas.
- **endlessh**: endlessh is an SSH tarpit that very slowly sends an endless, random SSH banner. It keeps SSH clients locked up for hours or even days at a time. The purpose is to put your real SSH server on another port and then let the script kiddies get stuck in this tarpit instead of bothering a real server.
- **ldap-auth**: ldap-auth software is for authenticating users who request protected resources from servers proxied by nginx. It includes a daemon (ldap-auth) that communicates with an authentication server, and a webserver daemon that generates an authentication cookie based on the user's credentials. The daemons are written in Python for use with a Lightweight Directory Access Protocol (LDAP) authentication server (OpenLDAP or Microsoft Windows Active Directory 2003 and 2012).
- **netbootxyz**: netbootxyz is a way to PXE boot various operating system installers or utilities from one place within the BIOS without the need to go retrieve the media to run the tool. iPXE is used to provide a user-friendly menu from within the BIOS that lets you easily choose the operating system you want along with any specific versions or bootable flags.
- **netbox**: netbox is an IP address management (IPAM) and data center infrastructure management (DCIM) tool. Initially conceived by the network engineering team at DigitalOcean, NetBox was developed specifically to address the needs of network and infrastructure engineers. It is intended to function as a domain-specific source of truth for network operations.
- **openssh-server**: openssh-server is a sandboxed environment that allows ssh access without giving keys to the entire server.
- **snipe-it**: snipe-it makes asset management easy. It was built by people solving real-world IT and asset management problems, and a solid UX has always been a top priority. Straightforward design and bulk actions mean getting things done faster.

## Audiobooks

- **booksonic-air**: booksonic-air is a platform for accessing the audiobooks you own wherever you are.

## Automation

- **domoticz**: domoticz is a Home Automation System that lets you monitor and configure various devices like lights and switches, plus various sensors/meters for temperature, rain, wind, UV, electricity, gas, water and much more. Notifications/alerts can be sent to any mobile device.
- **habridge**: habridge emulates the Philips Hue API for other home automation gateways such as an Amazon Echo/Dot Gen 1 (Gen 2 has issues discovering ha-bridge) or other systems that support Philips Hue. The bridge handles basic commands such as "On", "Off" and "brightness" commands of the Hue protocol. This bridge can control most devices that have a distinct API.
- **homeassistant**: Home Assistant Core is open source home automation that puts local control and privacy first. Powered by a worldwide community of tinkerers and DIY enthusiasts. Perfect to run on a Raspberry Pi or a local server.
- **kanzi**: kanzi, formerly titled Kodi-Alexa, is a custom skill that is the ultimate voice remote control for navigating Kodi. It can do anything you can think of (100+ intents). This container also contains lexigram-cli to set up Kanzi with an Amazon Developer Account and automatically deploy it to Amazon.

## Backup

- **duplicati**: duplicati works with standard protocols like FTP, SSH and WebDAV as well as popular services like Microsoft OneDrive, Amazon Cloud Drive & S3, Google Drive, box.com, Mega, hubiC and many others.
- **resilio-sync**: resilio-sync (formerly BitTorrent Sync) uses the BitTorrent protocol to sync files and folders between all of your devices. There are both free and paid versions; this container supports both. There is an official sync image, but we created this one as it supports user mapping to simplify permissions for volumes.
- **rsnapshot**: rsnapshot is a filesystem snapshot utility based on rsync. rsnapshot makes it easy to make periodic snapshots of local machines, and remote machines over ssh. The code makes extensive use of hard links whenever possible to greatly reduce the disk space required.
- **syncthing**: syncthing replaces proprietary sync and cloud services with something open, trustworthy and decentralized. Your data is your data alone and you deserve to choose where it is stored, whether it is shared with some third party, and how it's transmitted over the Internet.

## Books

- **calibre**: calibre is a powerful and easy to use e-book manager. Users say it's outstanding and a must-have. It'll allow you to do nearly everything and it takes things a step beyond normal e-book software. It's also completely free and open source and great for both casual users and computer experts.
- **calibre-web**: calibre-web is a web app providing a clean interface for browsing, reading and downloading eBooks using an existing Calibre database. It is also possible to integrate Google Drive and edit metadata and your Calibre library through the app itself.
- **cops**: cops, by Sébastien Lucas, stands for Calibre OPDS (and HTML) PHP Server.
- **lazylibrarian**: lazylibrarian is a program to follow authors and grab metadata for all your digital reading needs. It uses a combination of Goodreads, LibraryThing and optionally GoogleBooks as sources for author and book info. This container is based on the DobyTang fork.
- **mylar3**: mylar3 is an automated comic book downloader (cbr/cbz) for use with NZB and torrents, written in Python. It supports SABnzbd, NZBGet, and many torrent clients in addition to DDL.
- **readarr**: readarr is a book manager and automation tool (Sonarr for ebooks).
- **ubooquity**: ubooquity is a free, lightweight and easy-to-use home server for your comics and ebooks. Use it to access your files from anywhere, with a tablet, an e-reader, a phone or a computer.

## Cloud

- **nextcloud**: nextcloud gives you access to all your files wherever you are.

## Crypto

- **gmail-order-bot**: gmail-order-bot is a bot used to leverage a Gmail account as an order messaging service to consume email orders from Nano Checkout and process them using any custom logic you choose.
- **nano**: nano is a digital payment protocol designed to be accessible and lightweight, with a focus on removing inefficiencies present in other cryptocurrencies. With ultrafast transactions and zero fees on a secure, green and decentralized network, this makes Nano ideal for everyday transactions.
- **nano-discord-bot**: nano-discord-bot is a bot used to hook into a self-hosted Nano RPC endpoint and Discord server to distribute funds from a faucet account.
- **nano-wallet**: nano-wallet is a wallet for Nano, a digital payment protocol designed to be accessible and lightweight, with a focus on removing inefficiencies present in other cryptocurrencies. With ultrafast transactions and zero fees on a secure, green and decentralized network, this makes Nano ideal for everyday transactions.

## DNS

- **adguardhome-sync**: adguardhome-sync is a tool to synchronize AdGuard Home config to replica instances.
- **ddclient**: ddclient is a Perl client used to update dynamic DNS entries for accounts on Dynamic DNS Network Service Provider. It was originally written by Paul Burry and is now maintained mostly by wimpunk. It has the capability to update more than just dyndns and it can fetch your WAN IP address in a few different ways.
- **duckdns**: duckdns is a free service which will point a DNS (sub domains of duckdns.org) to an IP of your choice. The service is completely free, and doesn't require reactivation or forum posts to maintain its existence.

## Dashboard

- **heimdall**: heimdall is a way to organise all those links to your most used web sites and web applications in a simple way.
- **muximux**: muximux is a lightweight portal to view & manage your HTPC apps without having to run anything more than a PHP-enabled webserver. With Muximux you don't need to keep multiple tabs open, or bookmark the URL to all of your apps.

## Databases

- **mariadb**: mariadb is one of the most popular database servers. Made by the original developers of MySQL.
- **mysql-workbench**: MySQL Workbench is a unified visual tool for database architects, developers, and DBAs. MySQL Workbench provides data modeling, SQL development, and comprehensive administration tools for server configuration, user administration, backup, and much more.
- **phpmyadmin**: phpmyadmin is a free software tool written in PHP, intended to handle the administration of MySQL over the Web. phpMyAdmin supports a wide range of operations on MySQL and MariaDB.
- **sqlitebrowser**: DB Browser for SQLite is a high quality, visual, open source tool to create, design, and edit database files compatible with SQLite.

## Docker

- **docker-compose**: No description.
- **fleet**: fleet provides an online web interface which displays a set of maintained images from one or more owned repositories.

## Documents

- **libreoffice**: LibreOffice is a free and powerful office suite, and a successor to OpenOffice.org (commonly known as OpenOffice). Its clean interface and feature-rich tools help you unleash your creativity and enhance your productivity.
- **paperless-ng**: paperless-ng is an application by Daniel Quinn and contributors that indexes your scanned documents and allows you to easily search for documents and store metadata alongside your documents.
- **paperless-ngx**: paperless-ngx is an application by Daniel Quinn and contributors that indexes your scanned documents and allows you to easily search for documents and store metadata alongside your documents.
- **papermerge**: papermerge is an open source document management system (DMS) primarily designed for archiving and retrieving your digital documents. Instead of having piles of paper documents all over your desk, office or drawers, you can quickly scan them and configure your scanner to directly upload to Papermerge DMS.

## Downloaders

- **deluge**: deluge is a lightweight, Free Software, cross-platform BitTorrent client.
- **nntp2nntp**: nntp2nntp proxy allows you to use your NNTP account from multiple systems, each with its own user name and password. It fully supports SSL and you can also limit access to the proxy with SSL certificates. nntp2nntp proxy is very simple and pretty fast.
- **nzbget**: nzbget is a usenet downloader, written in C++ and designed with performance in mind to achieve maximum download speed by using very little system resources.
- **pyload-ng**: pyLoad is a Free and Open Source download manager written in Python and designed to be extremely lightweight, easily extensible and fully manageable via web.
- **qbittorrent**: The qbittorrent project aims to provide an open-source software alternative to µTorrent. qBittorrent is based on the Qt toolkit and libtorrent-rasterbar library.
- **sabnzbd**: sabnzbd makes Usenet as simple and streamlined as possible by automating everything we can. All you have to do is add an .nzb. SABnzbd takes over from there, where it will be automatically downloaded, verified, repaired, extracted and filed away with zero human interaction.
- **transmission**: transmission is designed for easy, powerful use. Transmission has the features you want from a BitTorrent client: encryption, a web interface, peer exchange, magnet links, DHT, µTP, UPnP and NAT-PMP port forwarding, webseed support, watch directories, tracker editing, global and per-torrent speed limits, and more.

## FTP

- **davos**: davos is an FTP automation tool that periodically scans given host locations for new files. It can be configured for various purposes, including listening for specific files to appear in the host location, ready for it to download and then move, if required. It also supports completion notifications as well as downstream API calls, to further the workflow.
- **filezilla**: FileZilla Client is a fast and reliable cross-platform FTP, FTPS and SFTP client with lots of useful features and an intuitive graphical user interface.

## Family

- **babybuddy**: babybuddy is a buddy for babies! Helps caregivers track sleep, feedings, diaper changes, tummy time and more to learn about and predict baby's needs without (as much) guesswork.
## File Sharing

- **projectsend**: projectsend is a self-hosted application that lets you upload files and assign them to specific clients that you create yourself. Secure, private and easy. No more depending on external services or e-mail to send those files.
- **pwndrop**: pwndrop is a self-deployable file hosting service for sending out red teaming payloads or securely sharing your private files over HTTP and WebDAV.
- **pydio-cells**: pydio-cells is the next-gen file sharing platform for organizations. It is a full rewrite of the Pydio project using the Go language, following a micro-service architecture.
- **snapdrop**: snapdrop is local file sharing in your browser. Inspired by Apple's AirDrop.
- **xbackbone**: xbackbone is a simple, self-hosted, lightweight PHP file manager that supports the instant sharing tool ShareX and *NIX systems. It supports uploading and displaying images, GIFs, video, code, formatted text, and file downloading and uploading. It also has a web UI with multi-user management, past uploads history and search support.

## Finance

- **budge**: budge is an open source 'budgeting with envelopes' personal finance app.

## Games

- **emulatorjs**: emulatorjs provides in-browser, web-based emulation portable to nearly any device for many retro consoles. A mix of emulators is used between Libretro and EmulatorJS.
- **minetest**: minetest (server) is a near-infinite-world block sandbox game and a game engine, inspired by InfiniMiner, Minecraft, and the like.

## Graphics

- **blender**: Blender is a free and open-source 3D computer graphics software toolset used for creating animated films, visual effects, art, 3D printed models, motion graphics, interactive 3D applications, virtual reality, and computer games. This image does not support GPU rendering out of the box, only an accelerated workspace experience.
- **kdenlive**: Kdenlive is a powerful free and open source cross-platform video editing program made by the KDE community. Feature-rich and production ready.

## IRC

- **limnoria**: limnoria is a robust, full-featured, and user/programmer-friendly Python IRC bot, with many existing plugins. Successor of the well-known Supybot.
- **ngircd**: ngircd is a free, portable and lightweight Internet Relay Chat server for small or private networks, developed under the GNU General Public License (GPL). It is easy to configure, can cope with dynamic IP addresses, and supports IPv6, SSL-protected connections as well as PAM for authentication. It is written from scratch and not based on the original IRCd.
- **pidgin**: Pidgin is a chat program which lets you log into accounts on multiple chat networks simultaneously. This means that you can be chatting with friends on XMPP and sitting in an IRC channel at the same time.
- **quassel-core**: quassel-core is a modern, cross-platform, distributed IRC client, meaning that one (or multiple) client(s) can attach to and detach from a central core.
- **quassel-web**: quassel-web is a web client for Quassel. Note that a Quassel-Core instance is required; we have a container available here.
- **thelounge**: thelounge (a fork of shoutIRC) is a web IRC client that you host on your own server.
- **znc**: znc is an IRC network bouncer or BNC. It can detach the client from the actual IRC server, and also from selected channels. Multiple clients from different locations can connect to a single ZNC account simultaneously and therefore appear under the same nickname on IRC.

## Indexers

- **jackett**: jackett works as a proxy server: it translates queries from apps (Sonarr, SickRage, CouchPotato, Mylar, etc.) into tracker-site-specific http queries, parses the html response, then sends results back to the requesting software. This allows for getting recent uploads (like RSS) and performing searches. Jackett is a single repository of maintained indexer scraping & translation logic, removing the burden from other apps.
- **nzbhydra2**: nzbhydra2 is a meta search application for NZB indexers, the "spiritual successor" to NZBmegasearcH, and an evolution of the original application NZBHydra.
- **prowlarr**: prowlarr is an indexer manager/proxy built on the popular *arr .NET/ReactJS base stack to integrate with your various PVR apps. Prowlarr supports both torrent trackers and Usenet indexers. It integrates seamlessly with Sonarr, Radarr, Lidarr, and Readarr, offering complete management of your indexers with no per-app indexer setup required (we do it all).

## Media Management

- **bazarr**: bazarr is a companion application to Sonarr and Radarr. It can manage and download subtitles based on your requirements. You define your preferences by TV show or movie and Bazarr takes care of everything for you.
- **medusa**: medusa is an automatic video library manager for TV shows. It watches for new episodes of your favorite shows, and when they are posted it does its magic.
- **plex-meta-manager**: plex-meta-manager is a Python 3 script that can be continuously run using YAML configuration files to update on a schedule the metadata of the movies, shows, and collections in your libraries, as well as automatically build collections based on various methods, all detailed in the wiki.
- **radarr**: radarr is a fork of Sonarr to work with movies à la Couchpotato.
- **sickchill**: sickchill is an automatic video library manager for TV shows. It watches for new episodes of your favorite shows, and when they are posted it does its magic.
- **sickgear**: SickGear provides management of TV shows and/or anime; it detects new episodes, links downloader apps, and more.
- **sonarr**: sonarr (formerly NZBdrone) is a PVR for usenet and bittorrent users. It can monitor multiple RSS feeds for new episodes of your favorite shows and will grab, sort and rename them. It can also be configured to automatically upgrade the quality of files already downloaded when a better quality format becomes available.
## Media Players

- **emby**: emby organizes video, music, live TV, and photos from personal media libraries and streams them to smart TVs, streaming boxes and mobile devices. This container is packaged as a standalone Emby Media Server.
- **jellyfin**: jellyfin is a Free Software Media System that puts you in control of managing and streaming your media. It is an alternative to the proprietary Emby and Plex, to provide media from a dedicated server to end-user devices via multiple apps. Jellyfin is descended from Emby's 3.5.2 release and ported to the .NET Core framework to enable full cross-platform support. There are no strings attached, no premium licenses or features, and no hidden agendas: just a team who want to build something better and work together to achieve it.
- **plex**: plex organizes video, music and photos from personal media libraries and streams them to smart TVs, streaming boxes and mobile devices. This container is packaged as a standalone Plex Media Server.

## Media Requesters

- **doplarr**: doplarr is an *arr request bot for Discord.
- **ombi**: ombi allows you to host your own Plex request and user management system.
- **overseerr**: overseerr is a request management and media discovery tool built to work with your existing Plex ecosystem.

## Media Tools

- **embystat**: embystat is a personal web server that can calculate all kinds of statistics from your (local) Emby server. Just install this on your server and let it calculate all kinds of fun stuff.
- **ffmpeg**: No description.
- **htpcmanager**: htpcmanager is a front end for many HTPC-related applications.
- **minisatip**: minisatip is a multi-threaded satip server version 1.2 that runs under Linux and was tested with DVB-S, DVB-S2, DVB-T, DVB-T2, DVB-C, DVB-C2, ATSC and ISDB-T cards.
- **oscam**: oscam is an Open Source Conditional Access Module software used for descrambling DVB transmissions using smart cards. It's both a server and a client.
- **synclounge**: synclounge is a third party tool that allows you to watch Plex in sync with your friends/family, wherever you are.
- **tautulli**: tautulli is a python based web application for monitoring, analytics and notifications for Plex Media Server.
- **tvheadend**: tvheadend is a TV streaming server and recorder for Linux, FreeBSD and Android supporting DVB-S, DVB-S2, DVB-C, DVB-T, ATSC, ISDB-T, IPTV, SAT>IP and HDHomeRun as input sources.
- **webgrabplus**: webgrabplus is a multi-site incremental xmltv EPG grabber. It collects TV program guide data from selected tvguide sites for your favourite channels.

## Monitor

- **apprise-api**: apprise-api takes advantage of Apprise through your network with a user-friendly API.
- **healthchecks**: healthchecks is a watchdog for your cron jobs. It's a web server that listens for pings from your cron jobs, plus a web interface.
- **librespeed**: librespeed is a very lightweight speedtest implemented in JavaScript, using XMLHttpRequest and Web Workers.
- **smokeping**: smokeping keeps track of your network latency. For a full example of what this application is capable of, visit UCDavis.
- **syslog-ng**: syslog-ng allows you to flexibly collect, parse, classify, rewrite and correlate logs from across your infrastructure and store or route them to log analysis tools.

## Music

- **airsonic-advanced**: airsonic-advanced is a free, web-based media streamer, providing ubiquitous access to your music. Use it to share your music with friends, or to listen to your own music while at work. You can stream to multiple players simultaneously, for instance to one player in your kitchen and another in your living room.
- **audacity**: Audacity is an easy-to-use, multi-track audio editor and recorder. Developed by a group of volunteers as open source.
- **beets**: beets is a music library manager and not, for the most part, a music player. It does include a simple player plugin and an experimental web-based player, but it generally leaves actual sound reproduction to specialized tools.
- **daapd**: daapd is a DAAP (iTunes) media server with support for AirPlay devices, Apple Remote (and compatibles), Chromecast, MPD and internet radio.
- **headphones**: headphones is an automated music downloader for NZB and torrent, written in Python. It supports SABnzbd, NZBget, Transmission, µTorrent and Blackhole.
- **lidarr**: lidarr is a music collection manager for Usenet and BitTorrent users. It can monitor multiple RSS feeds for new tracks from your favorite artists and will grab, sort and rename them. It can also be configured to automatically upgrade the quality of files already downloaded when a better quality format becomes available.
- **mstream**: mstream is a personal music streaming server. You can use mStream to stream your music from your home computer to any device, anywhere. There are mobile apps available for both Android and iPhone.

## Network

- **unifi-controller**: The unifi-controller software is a powerful, enterprise wireless software engine ideal for high-density client deployments requiring low latency and high uptime performance.
- **wireshark**: Wireshark is the world's foremost and widely-used network protocol analyzer. It lets you see what's happening on your network at a microscopic level and is the de facto (and often de jure) standard across many commercial and non-profit enterprises, government agencies, and educational institutions. Wireshark development thrives thanks to the volunteer contributions of networking experts around the globe and is the continuation of a project started by Gerald Combs in 1998.

## Photos

- **chevereto**: chevereto is an image hosting software that allows you to create a beautiful and full-featured image hosting website on your own server. It's your hosting and your rules, so say goodbye to closures and restrictions.
- **darktable**: darktable is an open source photography workflow application and raw developer. A virtual lighttable and darkroom for photographers. It manages your digital negatives in a database, lets you view them through a zoomable lighttable and enables you to develop raw images and enhance them.
- **digikam**: digiKam is professional photo management with the power of open source.
- **lychee**: lychee is a free photo-management tool, which runs on your server or web-space. Installing is a matter of seconds. Upload, manage and share photos like from a native application. Lychee comes with everything you need and all your photos are stored securely.
- **photoshow**: photoshow is gallery software at its easiest; it doesn't even require a database.
- **piwigo**: piwigo is a photo gallery software for the web that comes with powerful features to publish and manage your collection of pictures.
- **pixapop**: pixapop is an open-source single page application to view your photos in the easiest way possible.

## Programming

- **cloud9**: Cloud9 is a complete web based IDE with terminal access. This container is for running their core SDK locally and developing plugins.
- **code-server**: code-server is VS Code running on a remote server, accessible through the browser.
- **openvscode-server**: openvscode-server provides a version of VS Code that runs a server on a remote machine and allows access through a modern web browser.
- **pylon**: pylon is a web based integrated development environment built with Node.js as a backend and with a supercharged JavaScript/HTML5 frontend, licensed under GPL version 3. This project originates from the Cloud9 v2 project.

## RSS

- **freshrss**: freshrss is a free, self-hostable aggregator for RSS feeds.

## Recipes

- **grocy**: grocy is an ERP system for your kitchen! Cut down on food waste, and manage your chores with this brilliant utility.
## Remote

- **guacd**: guacd - Apache Guacamole is a clientless remote desktop gateway. It supports standard protocols like VNC, RDP, and SSH. This container is only the backend server component needed to use the official or 3rd party HTML5 frontends.
- **rdesktop**: rdesktop - containers containing full desktop environments in many popular flavors for Alpine, Ubuntu, Arch, and Fedora, accessible via RDP.
- **remmina**: Remmina is a remote desktop client written in GTK, aiming to be useful for system administrators and travellers who need to work with lots of remote computers in front of either large or tiny screens. Remmina supports multiple network protocols in an integrated and consistent user interface. Currently RDP, VNC, SPICE, NX, XDMCP, SSH and EXEC are supported.
- **webtop**: webtop - Alpine, Ubuntu, Fedora, and Arch based containers containing full desktop environments in officially supported flavors, accessible via any modern web browser.

## Science

- **boinc**: BOINC is a platform for high-throughput computing on a large scale (thousands or millions of computers). It can be used for volunteer computing (using consumer devices) or grid computing (using organizational resources). It supports virtualized, parallel, and GPU-based applications.
- **foldingathome**: Folding@home is a distributed computing project for simulating protein dynamics, including the process of protein folding and the movements of proteins implicated in a variety of diseases. It brings together citizen scientists who volunteer to run simulations of protein dynamics on their personal computers. Insights from this data are helping scientists to better understand biology, and providing new opportunities for developing therapeutics.

## Storage

- **diskover**: diskover is an open source file system indexer that uses Elasticsearch to index and manage data across heterogeneous storage systems.
- **qdirstat**: QDirStat provides Qt-based directory statistics: KDirStat without any KDE, from the author of the original KDirStat.
- **scrutiny**: scrutiny is a WebUI for smartd S.M.A.R.T. monitoring. Scrutiny is a hard drive health dashboard & monitoring solution, merging manufacturer-provided S.M.A.R.T. metrics with real-world failure rates from Backblaze.

## Tools

- **yq**: No description.

## VPN

- **wireguard**: WireGuard® is an extremely simple yet fast and modern VPN that utilizes state-of-the-art cryptography. It aims to be faster, simpler, leaner, and more useful than IPsec, while avoiding the massive headache. It intends to be considerably more performant than OpenVPN. WireGuard is designed as a general purpose VPN for running on embedded interfaces and super computers alike, fit for many different circumstances. Initially released for the Linux kernel, it is now cross-platform (Windows, macOS, BSD, iOS, Android) and widely deployable. It is currently under heavy development, but already it might be regarded as the most secure, easiest to use, and simplest VPN solution in the industry.

## Web

- **firefox**: Firefox Browser, also known as Mozilla Firefox or simply Firefox, is a free and open-source web browser developed by the Mozilla Foundation and its subsidiary, the Mozilla Corporation. Firefox uses the Gecko layout engine to render web pages, which implements current and anticipated web standards.
- **grav**: grav is a fast, simple, and flexible file-based web platform.
- **nginx**: nginx is a simple webserver with php support. The config files reside in /config for easy user customization.
- **swag**: SWAG - Secure Web Application Gateway (formerly known as letsencrypt, no relation to Let's Encrypt™) sets up an Nginx webserver and reverse proxy with php support and a built-in certbot client that automates free SSL server certificate generation and renewal processes (Let's Encrypt and ZeroSSL). It also contains fail2ban for intrusion prevention.

## Wiki

- **bookstack**: bookstack is a free and open source wiki designed for creating beautiful documentation. Featuring a simple but powerful WYSIWYG editor, it allows teams to create detailed and useful documentation with ease.
- **dillinger**: dillinger is a cloud-enabled, mobile-ready, offline-storage, AngularJS-powered HTML5 Markdown editor.
- **dokuwiki**: dokuwiki is a simple to use and highly versatile open source wiki software that doesn't require a database. It is loved by users for its clean and readable syntax. The ease of maintenance, backup and integration makes it an administrator's favorite. Built-in access controls and authentication connectors make DokuWiki especially useful in the enterprise context, and the large number of plugins contributed by its vibrant community allow for a broad range of use cases beyond a traditional wiki.
- **hedgedoc**: HedgeDoc lets you create real-time collaborative markdown notes.
- **raneto**: raneto is an open source knowledgebase platform that uses static Markdown files to power your knowledgebase.
- **wikijs**: wikijs is a modern, lightweight and powerful wiki app built on Node.js.
It includes a daemon (ldap-auth) that communicates with an authentication server, and a webserver daemon that generates an authentication cookie based on the user's credentials. The daemons are written in Python for use with a Lightweight Directory Access Protocol (LDAP) authentication server (OpenLDAP or Microsoft Windows Active Directory 2003 and 2012).
netbootxyz: netbootxyz is a way to PXE boot various operating system installers or utilities from one place within the BIOS, without having to retrieve the media to run the tool. iPXE is used to provide a user-friendly menu from within the BIOS that lets you easily choose the operating system you want, along with any specific versions or bootable flags.
netbox: netbox is an IP address management (IPAM) and data center infrastructure management (DCIM) tool. Initially conceived by the network engineering team at DigitalOcean, NetBox was developed specifically to address the needs of network and infrastructure engineers. It is intended to function as a domain-specific source of truth for network operations.
openssh-server: openssh-server is a sandboxed environment that allows ssh access without giving keys to the entire server.
snipe-it: snipe-it makes asset management easy. It was built by people solving real-world IT and asset management problems, and a solid UX has always been a top priority. Straightforward design and bulk actions mean getting things done faster.

Audiobooks

booksonic-air: booksonic-air is a platform for accessing the audiobooks you own wherever you are.
At the moment the platform consists of

Automation

domoticz: domoticz is a Home Automation System that lets you monitor and configure various devices like lights, switches, and various sensors/meters such as temperature, rain, wind, UV, electricity, gas, water and much more. Notifications/alerts can be sent to any mobile device.
habridge: habridge emulates the Philips Hue API for other home automation gateways such as an Amazon Echo/Dot Gen 1 (Gen 2 has issues discovering ha-bridge) or other systems that support Philips Hue. The bridge handles basic commands such as "On", "Off" and "brightness" commands of the Hue protocol. It can control most devices that have a distinct API.
homeassistant: Home Assistant Core is open source home automation that puts local control and privacy first. Powered by a worldwide community of tinkerers and DIY enthusiasts. Perfect to run on a Raspberry Pi or a local server.
kanzi: kanzi, formerly titled Kodi-Alexa, is a custom skill that serves as the ultimate voice remote control for navigating Kodi. It can do anything you can think of (100+ intents). This container also contains lexigram-cli to set up Kanzi with an Amazon Developer Account and automatically deploy it to Amazon.

Backup

duplicati: duplicati works with standard protocols like FTP, SSH and WebDAV as well as popular services like Microsoft OneDrive, Amazon Cloud Drive & S3, Google Drive, box.com, Mega, hubiC and many others.
resilio-sync: resilio-sync (formerly BitTorrent Sync) uses the BitTorrent protocol to sync files and folders between all of your devices. There are both free and paid versions; this container supports both. There is an official sync image, but we created this one because it supports user mapping to simplify permissions for volumes.
rsnapshot: rsnapshot is a filesystem snapshot utility based on rsync.
rsnapshot makes it easy to make periodic snapshots of local machines, and of remote machines over ssh. The code makes extensive use of hard links whenever possible to greatly reduce the disk space required.
syncthing: syncthing replaces proprietary sync and cloud services with something open, trustworthy and decentralized. Your data is your data alone and you deserve to choose where it is stored, whether it is shared with some third party, and how it's transmitted over the Internet.

Books

calibre: calibre is a powerful and easy to use e-book manager. Users say it's outstanding and a must-have. It allows you to do nearly everything and takes things a step beyond normal e-book software. It's also completely free and open source and great for both casual users and computer experts.
calibre-web: calibre-web is a web app providing a clean interface for browsing, reading and downloading eBooks from an existing Calibre database. It is also possible to integrate Google Drive and edit metadata and your Calibre library through the app itself.
cops: cops, by Sébastien Lucas, stands for Calibre OPDS (and HTML) PHP Server.
lazylibrarian: lazylibrarian is a program to follow authors and grab metadata for all your digital reading needs. It uses a combination of Goodreads, LibraryThing and optionally GoogleBooks as sources for author and book info. This container is based on the DobyTang fork.
mylar3: mylar3 is an automated comic book downloader (cbr/cbz) for use with NZB and torrents, written in Python. It supports SABnzbd, NZBGET, and many torrent clients in addition to DDL.
readarr: readarr is a book manager and automation tool (Sonarr for ebooks).
ubooquity: ubooquity is a free, lightweight and easy-to-use home server for your comics and ebooks.
Use it to access your files from anywhere, with a tablet, an e-reader, a phone or a computer.

Cloud

nextcloud: nextcloud gives you access to all your files wherever you are.

Crypto

gmail-order-bot: gmail-order-bot is a bot used to leverage a Gmail account as an order messaging service to consume email orders from Nano Checkout and process them using any custom logic you choose.
nano: nano is a digital payment protocol designed to be accessible and lightweight, with a focus on removing inefficiencies present in other cryptocurrencies. With ultrafast transactions and zero fees on a secure, green and decentralized network, this makes Nano ideal for everyday transactions.
nano-discord-bot: nano-discord-bot is a bot used to hook into a self-hosted Nano RPC endpoint and a Discord server to distribute funds from a faucet account.
nano-wallet: nano-wallet is a wallet for nano, a digital payment protocol designed to be accessible and lightweight, with a focus on removing inefficiencies present in other cryptocurrencies. With ultrafast transactions and zero fees on a secure, green and decentralized network, this makes Nano ideal for everyday transactions.

DNS

adguardhome-sync: adguardhome-sync is a tool to synchronize AdGuardHome config to replica instances.
ddclient: ddclient is a Perl client used to update dynamic DNS entries for accounts on Dynamic DNS Network Service Provider. It was originally written by Paul Burry and is now maintained mostly by wimpunk. It can update more than just dyndns, and it can fetch your WAN IP address in a few different ways.
duckdns: duckdns is a free service which will point a DNS entry (sub domains of duckdns.org) to an IP of your choice.
The service is completely free, and doesn't require reactivation or forum posts to maintain its existence.

Dashboard

heimdall: heimdall is a way to organise all those links to your most used web sites and web applications in a simple way.
muximux: muximux is a lightweight portal to view and manage your HTPC apps without having to run anything more than a PHP-enabled webserver. With Muximux you don't need to keep multiple tabs open, or bookmark the URLs of all your apps.

Databases

mariadb: mariadb is one of the most popular database servers. Made by the original developers of MySQL.
mysql-workbench: MySQL Workbench is a unified visual tool for database architects, developers, and DBAs. MySQL Workbench provides data modeling, SQL development, and comprehensive administration tools for server configuration, user administration, backup, and much more.
phpmyadmin: phpmyadmin is a free software tool written in PHP, intended to handle the administration of MySQL over the Web. phpMyAdmin supports a wide range of operations on MySQL and MariaDB.
sqlitebrowser: DB Browser for SQLite is a high quality, visual, open source tool to create, design, and edit database files compatible with SQLite.

Docker

docker-compose: No description.
fleet: fleet provides an online web interface which displays a set of maintained images from one or more owned repositories.

Documents

libreoffice: LibreOffice is a free and powerful office suite, and a successor to OpenOffice.org (commonly known as OpenOffice). Its clean interface and feature-rich tools help you unleash your creativity and enhance your productivity.
paperless-ng: paperless-ng is an application by Daniel Quinn and contributors that indexes your scanned documents and allows you to easily search for documents and store metadata alongside your documents.
paperless-ngx: paperless-ngx is an application by Daniel Quinn and contributors that indexes your scanned documents and allows you to easily search for documents and store metadata alongside your documents.
papermerge: papermerge is an open source document management system (DMS) primarily designed for archiving and retrieving your digital documents. Instead of having piles of paper documents all over your desk, office or drawers, you can quickly scan them and configure your scanner to upload directly to Papermerge DMS.

Downloaders

deluge: deluge is a lightweight, Free Software, cross-platform BitTorrent client.
nntp2nntp: nntp2nntp proxy allows you to use your NNTP account from multiple systems, each with its own username and password. It fully supports SSL, and you can also limit access to the proxy with SSL certificates. nntp2nntp proxy is very simple and pretty fast.
nzbget: nzbget is a usenet downloader, written in C++ and designed with performance in mind to achieve maximum download speed while using very little system resources.
pyload-ng: pyLoad is a Free and Open Source download manager written in Python, designed to be extremely lightweight, easily extensible and fully manageable via the web.
qbittorrent: The qbittorrent project aims to provide an open-source software alternative to µTorrent. qBittorrent is based on the Qt toolkit and the libtorrent-rasterbar library.
sabnzbd: sabnzbd makes Usenet as simple and streamlined as possible by automating everything we can. All you have to do is add an .nzb; SABnzbd takes over from there, and the download is automatically fetched, verified, repaired, extracted and filed away with zero human interaction.
transmission: transmission is designed for easy, powerful use. Transmission has the features you want from a BitTorrent client: encryption, a web interface, peer exchange, magnet links, DHT, µTP, UPnP and NAT-PMP port forwarding, webseed support, watch directories, tracker editing, global and per-torrent speed limits, and more.

FTP

davos: davos is an FTP automation tool that periodically scans given host locations for new files. It can be configured for various purposes, including listening for specific files to appear in the host location, ready for it to download and then move, if required. It also supports completion notifications as well as downstream API calls, to further the workflow.
filezilla: FileZilla Client is a fast and reliable cross-platform FTP, FTPS and SFTP client with lots of useful features and an intuitive graphical user interface.

Family

babybuddy: babybuddy is a buddy for babies! It helps caregivers track sleep, feedings, diaper changes, tummy time and more, to learn about and predict a baby's needs without (as much) guesswork.

File Sharing

projectsend: projectsend is a self-hosted application that lets you upload files and assign them to specific clients that you create yourself. Secure, private and easy. No more depending on external services or e-mail to send those files.
pwndrop: pwndrop is a self-deployable file hosting service for sending out red teaming payloads or securely sharing your private files over HTTP and WebDAV.
pydio-cells: pydio-cells is the nextgen file sharing platform for organizations. It is a full rewrite of the Pydio project using the Go language, following a micro-service architecture.
snapdrop: snapdrop is local file sharing in your browser. Inspired by Apple's AirDrop.
xbackbone: xbackbone is a simple, self-hosted, lightweight PHP file manager that supports the instant sharing tool ShareX and *NIX systems. It supports uploading and displaying images, GIFs, video, code, formatted text, and file downloading and uploading. It also has a web UI with multi-user management, past uploads history and search support.

Finance

budge: budge is an open source 'budgeting with envelopes' personal finance app.

Games

emulatorjs: emulatorjs provides in-browser, web-based emulation portable to nearly any device for many retro consoles. A mix of emulators is used between Libretro and EmulatorJS.
minetest: minetest (server) is a near-infinite-world block sandbox game and a game engine, inspired by InfiniMiner, Minecraft, and the like.

Graphics

blender: Blender is a free and open-source 3D computer graphics software toolset used for creating animated films, visual effects, art, 3D printed models, motion graphics, interactive 3D applications, virtual reality, and computer games. This image does not support GPU rendering out of the box, only an accelerated workspace experience.
kdenlive: Kdenlive is a powerful free and open source cross-platform video editing program made by the KDE community. Feature rich and production ready.

IRC

limnoria: limnoria is a robust, full-featured, and user/programmer-friendly Python IRC bot, with many existing plugins. Successor of the well-known Supybot.
ngircd: ngircd is a free, portable and lightweight Internet Relay Chat server for small or private networks, developed under the GNU General Public License (GPL).
It is easy to configure, can cope with dynamic IP addresses, and supports IPv6, SSL-protected connections as well as PAM for authentication. It is written from scratch and not based on the original IRCd.
pidgin: Pidgin is a chat program which lets you log into accounts on multiple chat networks simultaneously. This means that you can be chatting with friends on XMPP and sitting in an IRC channel at the same time.
quassel-core: quassel-core is a modern, cross-platform, distributed IRC client, meaning that one (or multiple) client(s) can attach to and detach from a central core.
quassel-web: quassel-web is a web client for Quassel. Note that a Quassel-Core instance is required; we have a container available for it as well.
thelounge: thelounge (a fork of shoutIRC) is a web IRC client that you host on your own server.
znc: znc is an IRC network bouncer or BNC. It can detach the client from the actual IRC server, and also from selected channels. Multiple clients from different locations can connect to a single ZNC account simultaneously and therefore appear under the same nickname on IRC.

Indexers

jackett: jackett works as a proxy server: it translates queries from apps (Sonarr, SickRage, CouchPotato, Mylar, etc.) into tracker-site-specific http queries, parses the html response, then sends results back to the requesting software. This allows for getting recent uploads (like RSS) and performing searches. Jackett is a single repository of maintained indexer scraping and translation logic, removing the burden from other apps.
nzbhydra2: nzbhydra2 is a meta search application for NZB indexers, the "spiritual successor" to NZBmegasearcH, and an evolution of the original application NZBHydra.
prowlarr: prowlarr is an indexer manager/proxy built on the popular *arr .net/reactjs base stack to integrate with your various PVR apps. Prowlarr supports both Torrent Trackers and Usenet Indexers.
It integrates seamlessly with Sonarr, Radarr, Lidarr, and Readarr, offering complete management of your indexers with no per-app indexer setup required (we do it all).

Media Management

bazarr: bazarr is a companion application to Sonarr and Radarr. It can manage and download subtitles based on your requirements. You define your preferences by TV show or movie and Bazarr takes care of everything for you.
medusa: medusa is an automatic Video Library Manager for TV Shows. It watches for new episodes of your favorite shows, and when they are posted it does its magic.
plex-meta-manager: plex-meta-manager is a Python 3 script that can be continuously run using YAML configuration files to update the metadata of the movies, shows, and collections in your libraries on a schedule, as well as automatically build collections based on various methods, all detailed in the wiki.
radarr: radarr is a fork of Sonarr that works with movies, à la Couchpotato.
sickchill: sickchill is an Automatic Video Library Manager for TV Shows. It watches for new episodes of your favorite shows, and when they are posted it does its magic.
sickgear: SickGear provides management of TV shows and/or Anime; it detects new episodes, links downloader apps, and more.
sonarr: sonarr (formerly NZBdrone) is a PVR for usenet and bittorrent users. It can monitor multiple RSS feeds for new episodes of your favorite shows and will grab, sort and rename them. It can also be configured to automatically upgrade the quality of files already downloaded when a better quality format becomes available.

Media Players

emby: emby organizes video, music, live TV, and photos from personal media libraries and streams them to smart TVs, streaming boxes and mobile devices. This container is packaged as a standalone emby Media Server.
jellyfin: jellyfin is a Free Software Media System that puts you in control of managing and streaming your media. It is an alternative to the proprietary Emby and Plex, providing media from a dedicated server to end-user devices via multiple apps. Jellyfin is descended from Emby's 3.5.2 release and ported to the .NET Core framework to enable full cross-platform support. There are no strings attached, no premium licenses or features, and no hidden agendas: just a team who want to build something better and work together to achieve it.
plex: plex organizes video, music and photos from personal media libraries and streams them to smart TVs, streaming boxes and mobile devices. This container is packaged as a standalone Plex Media Server.

Media Requesters

doplarr: doplarr is an *arr request bot for Discord.
ombi: ombi allows you to host your own Plex Request and user management system.
overseerr: overseerr is a request management and media discovery tool built to work with your existing Plex ecosystem.

Media Tools

embystat: embystat is a personal web server that can calculate all kinds of statistics from your (local) Emby server. Just install it on your server and let it calculate all kinds of fun stuff.
ffmpeg: No description.
htpcmanager: htpcmanager is a front end for many htpc related applications.
minisatip: minisatip is a multi-threaded satip server version 1.2 that runs under Linux and has been tested with DVB-S, DVB-S2, DVB-T, DVB-T2, DVB-C, DVB-C2, ATSC and ISDB-T cards.
oscam: oscam is an Open Source Conditional Access Module software used for descrambling DVB transmissions using smart cards. It's both a server and a client.
synclounge: synclounge is a third party tool that allows you to watch Plex in sync with your friends/family, wherever you are.
tautulli: tautulli is a python based web application for monitoring, analytics and notifications for Plex Media Server.
tvheadend: tvheadend is a TV streaming server and recorder for Linux, FreeBSD and Android, supporting DVB-S, DVB-S2, DVB-C, DVB-T, ATSC, ISDB-T, IPTV, SAT>IP and HDHomeRun as input sources.
webgrabplus: webgrabplus is a multi-site incremental xmltv epg grabber. It collects tv-program guide data from selected tvguide sites for your favourite channels.

Monitor

apprise-api: apprise-api takes advantage of Apprise through your network with a user-friendly API.
healthchecks: healthchecks is a watchdog for your cron jobs. It's a web server that listens for pings from your cron jobs, plus a web interface.
librespeed: librespeed is a very lightweight speed test implemented in JavaScript, using XMLHttpRequest and Web Workers.
smokeping: smokeping keeps track of your network latency. For a full example of what this application is capable of, visit UCDavis.
syslog-ng: syslog-ng allows you to flexibly collect, parse, classify, rewrite and correlate logs from across your infrastructure and store or route them to log analysis tools.

Music

airsonic-advanced: airsonic-advanced is a free, web-based media streamer, providing ubiquitous access to your music. Use it to share your music with friends, or to listen to your own music while at work. You can stream to multiple players simultaneously, for instance to one player in your kitchen and another in your living room.
audacity: Audacity is an easy-to-use, multi-track audio editor and recorder. Developed by a group of volunteers as open source.
beets: beets is a music library manager and not, for the most part, a music player. It does include a simple player plugin and an experimental web-based player, but it generally leaves actual sound reproduction to specialized tools.
daapd: daapd is a DAAP (iTunes) media server with support for AirPlay devices, Apple Remote (and compatibles), Chromecast, MPD and internet radio.
headphones: headphones is an automated music downloader for NZB and Torrent, written in Python. It supports SABnzbd, NZBget, Transmission, µTorrent and Blackhole.
lidarr: lidarr is a music collection manager for Usenet and BitTorrent users. It can monitor multiple RSS feeds for new tracks from your favorite artists and will grab, sort and rename them. It can also be configured to automatically upgrade the quality of files already downloaded when a better quality format becomes available.
mstream: mstream is a personal music streaming server. You can use mStream to stream your music from your home computer to any device, anywhere. There are mobile apps available for both Android and iPhone.

Network

unifi-controller: The unifi-controller software is a powerful, enterprise wireless software engine ideal for high-density client deployments requiring low latency and high uptime performance.
wireshark: Wireshark is the world's foremost and most widely-used network protocol analyzer. It lets you see what's happening on your network at a microscopic level and is the de facto (and often de jure) standard across many commercial and non-profit enterprises, government agencies, and educational institutions.
Wireshark development thrives thanks to the volunteer contributions of networking experts around the globe and is the continuation of a project started by Gerald Combs in 1998.

Photos

chevereto: chevereto is an image hosting software that allows you to create a beautiful and full-featured image hosting website on your own server. It's your hosting and your rules, so say goodbye to closures and restrictions.
darktable: darktable is an open source photography workflow application and raw developer. A virtual lighttable and darkroom for photographers. It manages your digital negatives in a database, lets you view them through a zoomable lighttable and enables you to develop raw images and enhance them.
digikam: digiKam is professional photo management with the power of open source.
lychee: lychee is a free photo-management tool, which runs on your server or web-space. Installing is a matter of seconds. Upload, manage and share photos like from a native application. Lychee comes with everything you need and all your photos are stored securely.
photoshow: photoshow is gallery software at its easiest; it doesn't even require a database.
piwigo: piwigo is a photo gallery software for the web that comes with powerful features to publish and manage your collection of pictures.
pixapop: pixapop is an open-source single page application to view your photos in the easiest way possible.

Programming

cloud9: Cloud9 is a complete web based IDE with terminal access. This container is for running their core SDK locally and developing plugins.
code-server: code-server is VS Code running on a remote server, accessible through the browser.
openvscode-server: openvscode-server provides a version of VS Code that runs a server on a remote machine and allows access through a modern web browser.
pylon: pylon is a web based integrated development environment built with Node.js as a backend and with a supercharged JavaScript/HTML5 frontend, licensed under GPL version 3. This project originates from the Cloud9 v2 project.

RSS

freshrss: freshrss is a free, self-hostable aggregator for rss feeds.

Recipes

grocy: grocy is an ERP system for your kitchen! Cut down on food waste, and manage your chores with this brilliant utility.

Remote

guacd: guacd - Apache Guacamole is a clientless remote desktop gateway. It supports standard protocols like VNC, RDP, and SSH. This container is only the backend server component needed to use the official or 3rd party HTML5 frontends.
rdesktop: rdesktop provides containers containing full desktop environments in many popular flavors for Alpine, Ubuntu, Arch, and Fedora, accessible via RDP.
remmina: Remmina is a remote desktop client written in GTK, aiming to be useful for system administrators and travellers who need to work with lots of remote computers in front of either large or tiny screens. Remmina supports multiple network protocols in an integrated and consistent user interface. Currently RDP, VNC, SPICE, NX, XDMCP, SSH and EXEC are supported.
webtop: webtop provides Alpine, Ubuntu, Fedora, and Arch based containers containing full desktop environments in officially supported flavors, accessible via any modern web browser.

Science

boinc: BOINC is a platform for high-throughput computing on a large scale (thousands or millions of computers). It can be used for volunteer computing (using consumer devices) or grid computing (using organizational resources).
It supports virtualized, parallel, and GPU-based applications. foldingathome Folding@home is a distributed computing project for simulating protein dynamics, including the process of protein folding and the movements of proteins implicated in a variety of diseases. It brings together citizen scientists who volunteer to run simulations of protein dynamics on their personal computers. Insights from this data are helping scientists to better understand biology, and providing new opportunities for developing therapeutics.","title":"Science"},{"location":"general/awesome-lsio/#storage","text":"Container Description diskover diskover is an open source file system indexer that uses Elasticsearch to index and manage data across heterogeneous storage systems. qdirstat QDirStat Qt-based directory statistics: KDirStat without any KDE -- from the author of the original KDirStat. scrutiny scrutiny WebUI for smartd S.M.A.R.T monitoring. Scrutiny is a Hard Drive Health Dashboard & Monitoring solution, merging manufacturer provided S.M.A.R.T metrics with real-world failure rates from Backblaze.","title":"Storage"},{"location":"general/awesome-lsio/#tools","text":"Container Description yq No description","title":"Tools"},{"location":"general/awesome-lsio/#vpn","text":"Container Description wireguard WireGuard\u00ae is an extremely simple yet fast and modern VPN that utilizes state-of-the-art cryptography. It aims to be faster, simpler, leaner, and more useful than IPsec, while avoiding the massive headache. It intends to be considerably more performant than OpenVPN. WireGuard is designed as a general purpose VPN for running on embedded interfaces and super computers alike, fit for many different circumstances. Initially released for the Linux kernel, it is now cross-platform (Windows, macOS, BSD, iOS, Android) and widely deployable. 
It is currently under heavy development, but already it might be regarded as the most secure, easiest to use, and simplest VPN solution in the industry.","title":"VPN"},{"location":"general/awesome-lsio/#web","text":"Container Description firefox Firefox Browser, also known as Mozilla Firefox or simply Firefox, is a free and open-source web browser developed by the Mozilla Foundation and its subsidiary, the Mozilla Corporation. Firefox uses the Gecko layout engine to render web pages, which implements current and anticipated web standards. grav grav is a Fast, Simple, and Flexible, file-based Web-platform. nginx nginx is a simple webserver with php support. The config files reside in /config for easy user customization. swag SWAG - Secure Web Application Gateway (formerly known as letsencrypt, no relation to Let's Encrypt\u2122) sets up an Nginx webserver and reverse proxy with php support and a built-in certbot client that automates free SSL server certificate generation and renewal processes (Let's Encrypt and ZeroSSL). It also contains fail2ban for intrusion prevention.","title":"Web"},{"location":"general/awesome-lsio/#wiki","text":"Container Description bookstack bookstack is a free and open source Wiki designed for creating beautiful documentation. Featuring a simple, but powerful WYSIWYG editor it allows for teams to create detailed and useful documentation with ease. dillinger dillinger is a cloud-enabled, mobile-ready, offline-storage, AngularJS powered HTML5 Markdown editor. dokuwiki dokuwiki is a simple to use and highly versatile Open Source wiki software that doesn't require a database. It is loved by users for its clean and readable syntax. The ease of maintenance, backup and integration makes it an administrator's favorite. 
Built in access controls and authentication connectors make DokuWiki especially useful in the enterprise context and the large number of plugins contributed by its vibrant community allow for a broad range of use cases beyond a traditional wiki. hedgedoc HedgeDoc gives you access to all your files wherever you are. raneto raneto - is an open source Knowledgebase platform that uses static Markdown files to power your Knowledgebase. wikijs wikijs A modern, lightweight and powerful wiki app built on NodeJS.","title":"Wiki"},{"location":"general/container-customization/","text":"Customizing LinuxServer Containers One of the challenges we face as an organization is making everyone happy with the functionality we provide for the software we package in Docker containers. As the projects that we package and distribute grow, conventionally so do the use cases along with large communities of power users. As it has become very difficult for us to support Swiss Army Knife style images we are looking to the community of users to start customizing our base image layer themselves. Something we provide and pride ourselves on is keeping our containers up to date with not only the latest external software releases, but also with the latest distribution level packages. Conventionally when people needed some form of custom functionality they would fork our source and build something once that suited their needs leaving this dangling fork without updates or basic maintenance. Behind the scenes we have been working to provide the community with the ability to customize our images not only for themselves but also for other users. 
This comes in the form of 3 different tools: Private Custom Scripts Private Custom Services Public Facing Docker Mods All of the functionality described in this post is live on every one of the containers we currently maintain: https://fleet.linuxserver.io NOTE: While the following support has been added to our containers, we will not give support to any custom scripts, services, or mods. If you are having an issue with one of our containers, be sure to disable all custom scripts/services/mods before seeking support. Custom Scripts The first part of this update is the support for a user's custom scripts to run at startup. In every container, simply create a new folder located at /custom-cont-init.d and add any scripts you want. These scripts can contain logic for installing packages, copying over custom files to other locations, or installing plugins. Because this location is outside of /config you will need to mount it like any other volume if you wish to make use of it. e.g. -v /home/foo/appdata/my-custom-files:/custom-cont-init.d if using the Docker CLI or services: bar: volumes: - /home/foo/appdata/bar:/config - /home/foo/appdata/my-custom-files:/custom-cont-init.d:ro if using compose. Where possible, to improve security, we recommend mounting them read-only ( :ro ) so that container processes cannot write to the location. One example use case: our Piwigo container has a plugin that supports video, but requires ffmpeg to be installed. No problem. Add this bad boy into a script file (can be named anything) and you're good to go. #!/bin/bash echo \"**** installing ffmpeg ****\" apk add --no-cache ffmpeg NOTE: The folder /custom-cont-init.d needs to be owned by root! If this is not the case, this folder will be renamed and a new (empty) folder will be created. This is to prevent remote code execution by putting scripts in the aforementioned folder.
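As a concrete host-side sketch of the steps above (the folder path and script name here are examples, not anything the image requires), you could stage the ffmpeg script like this:

```shell
#!/bin/bash
# Stage a custom startup script on the host (example path; override with CUSTOM_DIR).
CUSTOM_DIR="${CUSTOM_DIR:-/tmp/my-custom-files}"
mkdir -p "$CUSTOM_DIR"

# The script can be named anything; it runs inside the container at startup,
# so `apk` here is the container's package manager, not the host's.
cat > "$CUSTOM_DIR/install-ffmpeg.sh" <<'EOF'
#!/bin/bash
echo "**** installing ffmpeg ****"
apk add --no-cache ffmpeg
EOF
chmod +x "$CUSTOM_DIR/install-ffmpeg.sh"

# Remember the note above: the mounted folder itself must be owned by root on
# the host (e.g. sudo chown root:root "$CUSTOM_DIR"), or the container will
# rename it and create a new empty one in its place.
```

You would then mount it with -v "$CUSTOM_DIR":/custom-cont-init.d:ro as shown above.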
Custom Services There might also be a need to run an additional service in a container alongside what we already package. Similarly to the custom scripts, just create a new directory at /custom-services.d . The files in this directory should be named after the service they will be running. Similar to with custom scripts you will need to mount this folder like any other volume if you wish to make use of it. e.g. -v /home/foo/appdata/my-custom-services:/custom-services.d if using the Docker CLI or services: bar: volumes: - /home/foo/appdata/bar:/config - /home/foo/appdata/my-custom-services:/custom-services.d:ro if using compose. Where possible, to improve security, we recommend mounting them read-only ( :ro ) so that container processes cannot write to the location. Running cron in our containers is now as simple as a single file. Drop this script in /custom-services.d/cron and it will run automatically in the container: #!/usr/bin/with-contenv bash /usr/sbin/crond -f -S -l 0 -c /etc/crontabs NOTE: With this example, you will most likely need to have cron installed via a custom script using the technique in the previous section, and will need to populate the crontab. NOTE: The folder /custom-services.d needs to be owned by root! If this is not the case, this folder will be renamed and a new (empty) folder will be created. This is to prevent remote code execution by putting scripts in the aforementioned folder. Docker Mods In most cases if you needed to write some kind of custom logic to get a plugin to work or to use some kind of popular external service you will not be the only one that finds this logic useful. If you would like to publish and support your hard work we provide a system for a user to pass a single environment variable to the container to ingest your custom modifications. We consume Mods from Dockerhub and in order to publish one following our guide, you only need a Github Account and a Dockerhub account. 
(Our guide and example code can be found here) Essentially it is a system that stashes a tarball of scripts and any other files you need in an image layer on Dockerhub. When we spin up the container we will download this tarball and extract it to /. This allows community members to publish a relatively static pile of logic that will always be applied to an end user's up to date Linuxserver.io container. An example of how this logic can be used to greatly expand the functionality of our base containers would be to add VPN support to a Transmission container: docker create \\ --name=transmission \\ --cap-add=NET_ADMIN \\ -e PUID=1000 \\ -e PGID=1000 \\ -e DOCKER_MODS=taisun/config-mods:pia \\ -e PIAUSER=pmyuser \\ -e PIAPASS=mypassword \\ -e PIAENDPOINT=\"US New York City\" \\ -e TZ=US/Eastern \\ -p 9091:9091 \\ -p 51413:51413 \\ -p 51413:51413/udp \\ -v path to data:/config \\ -v path to downloads:/downloads \\ -v path to watch folder:/watch \\ --restart unless-stopped \\ linuxserver/transmission The source code for this mod can be found here . NOTE: When pulling in logic from external sources practice caution and trust the sources/community you get them from, as there are extreme security implications to consuming files from sources outside of our control. We are here to help If you are interested in writing custom logic and possibly sharing it with the community in the form of a Docker Mod we are always available to help you out. Our Discord server is best for quick direct contact and our Forum for a longer running project. There is zero barrier to entry for these levels of container customization and you are in complete control. We are looking forward to your next creation.","title":"Customizing LinuxServer Containers"},{"location":"general/container-customization/#customizing-linuxserver-containers","text":"One of the challenges we face as an organization is making everyone happy with the functionality we provide for the software we package in Docker containers. 
As the projects that we package and distribute grow, conventionally so do the use cases along with large communities of power users. As it has become very difficult for us to support Swiss Army Knife style images we are looking to the community of users to start customizing our base image layer themselves. Something we provide and pride ourselves on is keeping our containers up to date with not only the latest external software releases, but also with the latest distribution level packages. Conventionally when people needed some form of custom functionality they would fork our source and build something once that suited their needs leaving this dangling fork without updates or basic maintenance. Behind the scenes we have been working to provide the community with the ability to customize our images not only for themselves but also for other users. This comes in the form of 3 different tools: Private Custom Scripts Private Custom Services Public Facing Docker Mods All of the functionality described in this post is live on every one of the containers we currently maintain: https://fleet.linuxserver.io NOTE: While the following support has been added to our containers, we will not give support to any custom scripts, services, or mods. If you are having an issue with one of our containers, be sure to disable all custom scripts/services/mods before seeking support.","title":"Customizing LinuxServer Containers"},{"location":"general/container-customization/#custom-scripts","text":"The first part of this update is the support for a user's custom scripts to run at startup. In every container, simply create a new folder located at /custom-cont-init.d and add any scripts you want. These scripts can contain logic for installing packages, copying over custom files to other locations, or installing plugins. Because this location is outside of /config you will need to mount it like any other volume if you wish to make use of it. e.g. 
-v /home/foo/appdata/my-custom-files:/custom-cont-init.d if using the Docker CLI or services: bar: volumes: - /home/foo/appdata/bar:/config - /home/foo/appdata/my-custom-files:/custom-cont-init.d:ro if using compose. Where possible, to improve security, we recommend mounting them read-only ( :ro ) so that container processes cannot write to the location. One example use case: our Piwigo container has a plugin that supports video, but requires ffmpeg to be installed. No problem. Add this bad boy into a script file (can be named anything) and you're good to go. #!/bin/bash echo \"**** installing ffmpeg ****\" apk add --no-cache ffmpeg NOTE: The folder /custom-cont-init.d needs to be owned by root! If this is not the case, this folder will be renamed and a new (empty) folder will be created. This is to prevent remote code execution by putting scripts in the aforementioned folder.","title":"Custom Scripts"},{"location":"general/container-customization/#custom-services","text":"There might also be a need to run an additional service in a container alongside what we already package. Similarly to the custom scripts, just create a new directory at /custom-services.d . The files in this directory should be named after the service they will be running. Similar to custom scripts, you will need to mount this folder like any other volume if you wish to make use of it. e.g. -v /home/foo/appdata/my-custom-services:/custom-services.d if using the Docker CLI or services: bar: volumes: - /home/foo/appdata/bar:/config - /home/foo/appdata/my-custom-services:/custom-services.d:ro if using compose. Where possible, to improve security, we recommend mounting them read-only ( :ro ) so that container processes cannot write to the location. Running cron in our containers is now as simple as a single file.
Drop this script in /custom-services.d/cron and it will run automatically in the container: #!/usr/bin/with-contenv bash /usr/sbin/crond -f -S -l 0 -c /etc/crontabs NOTE: With this example, you will most likely need to have cron installed via a custom script using the technique in the previous section, and will need to populate the crontab. NOTE: The folder /custom-services.d needs to be owned by root! If this is not the case, this folder will be renamed and a new (empty) folder will be created. This is to prevent remote code execution by putting scripts in the aforementioned folder.","title":"Custom Services"},{"location":"general/container-customization/#docker-mods","text":"In most cases if you needed to write some kind of custom logic to get a plugin to work or to use some kind of popular external service you will not be the only one that finds this logic useful. If you would like to publish and support your hard work we provide a system for a user to pass a single environment variable to the container to ingest your custom modifications. We consume Mods from Dockerhub and in order to publish one following our guide, you only need a Github Account and a Dockerhub account. (Our guide and example code can be found here) Essentially it is a system that stashes a tarball of scripts and any other files you need in an image layer on Dockerhub. When we spin up the container we will download this tarball and extract it to /. This allows community members to publish a relatively static pile of logic that will always be applied to an end user's up to date Linuxserver.io container. 
An example of how this logic can be used to greatly expand the functionality of our base containers would be to add VPN support to a Transmission container: docker create \\ --name=transmission \\ --cap-add=NET_ADMIN \\ -e PUID=1000 \\ -e PGID=1000 \\ -e DOCKER_MODS=taisun/config-mods:pia \\ -e PIAUSER=pmyuser \\ -e PIAPASS=mypassword \\ -e PIAENDPOINT=\"US New York City\" \\ -e TZ=US/Eastern \\ -p 9091:9091 \\ -p 51413:51413 \\ -p 51413:51413/udp \\ -v path to data:/config \\ -v path to downloads:/downloads \\ -v path to watch folder:/watch \\ --restart unless-stopped \\ linuxserver/transmission The source code for this mod can be found here . NOTE: When pulling in logic from external sources practice caution and trust the sources/community you get them from, as there are extreme security implications to consuming files from sources outside of our control.","title":"Docker Mods"},{"location":"general/container-customization/#we-are-here-to-help","text":"If you are interested in writing custom logic and possibly sharing it with the community in the form of a Docker Mod we are always available to help you out. Our Discord server is best for quick direct contact and our Forum for a longer running project. There is zero barrier to entry for these levels of container customization and you are in complete control. We are looking forward to your next creation.","title":"We are here to help"},{"location":"general/container-execution/","text":"Container Execution You may find at some point you need to view the internal data of a container. 
Shell Access Particularly useful when debugging the application - to shell in to one of our containers, run the following: docker exec -it <container_name> /bin/bash Tailing the logs The vast majority of our images are configured to output the application logs to the console, which in Docker's terms means you can access them using the docker logs command: docker logs -f --tail=<number_of_lines> <container_name> The --tail argument is optional, but useful if the application has been running for a long time - the logs command by default will output all logs. To make life simpler for yourself here's a handy bash alias to do some of the leg work for you: # ~/.bash_aliases alias dtail='docker logs -tf --tail=\"50\" \"$@\"' Execute it with dtail <container_name> . Checking the build version If you are experiencing issues with one of our containers, it helps us to know which version of the image your container is running from. The primary reason we ask for this is because you may be reporting an issue we are aware of and have subsequently fixed. However, if you are running on the latest version of our image, it could indeed be a newly found bug, which we'd want to know more about.
To obtain the build version for the container: docker inspect -f '{{ index .Config.Labels \"build_version\" }}' <container_name> Or the image: docker inspect -f '{{ index .Config.Labels \"build_version\" }}' linuxserver/<image_name>","title":"Container Execution"},{"location":"general/container-execution/#container-execution","text":"You may find at some point you need to view the internal data of a container.","title":"Container Execution"},{"location":"general/container-execution/#shell-access","text":"Particularly useful when debugging the application - to shell in to one of our containers, run the following: docker exec -it <container_name> /bin/bash","title":"Shell Access"},{"location":"general/container-execution/#tailing-the-logs","text":"The vast majority of our images are configured to output the application logs to the console, which in Docker's terms means you can access them using the docker logs command: docker logs -f --tail=<number_of_lines> <container_name> The --tail argument is optional, but useful if the application has been running for a long time - the logs command by default will output all logs. To make life simpler for yourself here's a handy bash alias to do some of the leg work for you: # ~/.bash_aliases alias dtail='docker logs -tf --tail=\"50\" \"$@\"' Execute it with dtail <container_name> .","title":"Tailing the logs"},{"location":"general/container-execution/#checking-the-build-version","text":"If you are experiencing issues with one of our containers, it helps us to know which version of the image your container is running from. The primary reason we ask for this is because you may be reporting an issue we are aware of and have subsequently fixed. However, if you are running on the latest version of our image, it could indeed be a newly found bug, which we'd want to know more about.
To obtain the build version for the container: docker inspect -f '{{ index .Config.Labels \"build_version\" }}' <container_name> Or the image: docker inspect -f '{{ index .Config.Labels \"build_version\" }}' linuxserver/<image_name>","title":"Checking the build version"},{"location":"general/containers-101/","text":"Docker Containers: 101 A container bundles all the libraries required by an application to run; you no longer need to know which version of Java, Apache or whatever \u2013 the person who built the container for you took care of that. Containers don\u2019t usually ship with configuration files baked in though. This is because the contents of a container are \u2018stateless\u2019 or \u2018immutable\u2019. In English, this means the state or filesystem of the container itself cannot be modified after it is created. What do I need to know? To get started, not much. You will need to know about some of the terminology or concepts when performing more advanced tasks or troubleshooting but getting started couldn't be much simpler. docker run hello-world That's it, your first docker container. It presupposes you have docker installed but that's all it takes to run a container. You didn't need to know anything about installing what that app needed to run - this is the key benefit. hello-world is a simple example but imagine you have a complex application with a large number of dependencies and it is tied to a specific version of Python or Java. Then imagine you have a second app again tied to a specific, but different, version of Java or Python. Now you have to try and ensure these two (often conflicting) versions sit on the same host and play nice. In the world of containers these two versions can operate in complete isolation from one another. Bliss.
Key Terminology There are a few terms you might find useful to understand when working with containers: docker - the first, and most popular, container runtime - it sits as an abstraction layer between the kernel's features such as cgroups or namespaces and running applications container - a sandboxed, running instance of an image, isolated in memory image - a pre-built filesystem in a format ready to be understood by a container runtime (usually docker) volume - use volumes to persist data outside of the container's sandboxed filesystem environment - a way of configuring the sandboxed environment your container runs in Key Concepts Containers are environments completely sandboxed by the Linux kernel. It may help you to think of them somewhat like a small VM; however, in practice this is largely false. The Linux kernel controls access to various system resources utilising control groups (cgroups). We rely on docker to translate these complex concepts into simple ones that users can understand and consume. By default a running container has absolutely no context of the world around it. Out of the box you cannot connect from the outside world to the running webservers on ports 80 and 443 below. To allow entry to the sandbox from the outside world we must explicitly allow entry using the -p flag. docker run -d --name=letsencrypt -p 80:80 -p 443:443 linuxserver/letsencrypt Take this concept and multiply it across all aspects of a running application. Ports, volumes (i.e. the files you want to be available inside the container from outside the container), environment variables and so on. For us as developers this allows us to rule your system out when troubleshooting, as the environment the container runs in is identical from one host to the next.
Containers are an amazing way to run applications in a secure, sandboxed way.","title":"Docker Containers: 101"},{"location":"general/containers-101/#docker-containers-101","text":"A container bundles all the libraries required by an application to run; you no longer need to know which version of Java, Apache or whatever \u2013 the person who built the container for you took care of that. Containers don\u2019t usually ship with configuration files baked in though. This is because the contents of a container are \u2018stateless\u2019 or \u2018immutable\u2019. In English, this means the state or filesystem of the container itself cannot be modified after it is created.","title":"Docker Containers: 101"},{"location":"general/containers-101/#what-do-i-need-to-know","text":"To get started, not much. You will need to know about some of the terminology or concepts when performing more advanced tasks or troubleshooting but getting started couldn't be much simpler. docker run hello-world That's it, your first docker container. It presupposes you have docker installed but that's all it takes to run a container. You didn't need to know anything about installing what that app needed to run - this is the key benefit. hello-world is a simple example but imagine you have a complex application with a large number of dependencies and it is tied to a specific version of Python or Java. Then imagine you have a second app again tied to a specific, but different, version of Java or Python. Now you have to try and ensure these two (often conflicting) versions sit on the same host and play nice. In the world of containers these two versions can operate in complete isolation from one another.
Bliss.","title":"What do I need to know?"},{"location":"general/containers-101/#key-terminology","text":"There are a few terms you might find useful to understand when working with containers: docker - the first, and most popular, container runtime - it sits as an abstraction layer between the kernel's features such as cgroups or namespaces and running applications container - a sandboxed, running instance of an image, isolated in memory image - a pre-built filesystem in a format ready to be understood by a container runtime (usually docker) volume - use volumes to persist data outside of the container's sandboxed filesystem environment - a way of configuring the sandboxed environment your container runs in","title":"Key Terminology"},{"location":"general/containers-101/#key-concepts","text":"Containers are environments completely sandboxed by the Linux kernel. It may help you to think of them somewhat like a small VM; however, in practice this is largely false. The Linux kernel controls access to various system resources utilising control groups (cgroups). We rely on docker to translate these complex concepts into simple ones that users can understand and consume. By default a running container has absolutely no context of the world around it. Out of the box you cannot connect from the outside world to the running webservers on ports 80 and 443 below. To allow entry to the sandbox from the outside world we must explicitly allow entry using the -p flag. docker run -d --name=letsencrypt -p 80:80 -p 443:443 linuxserver/letsencrypt Take this concept and multiply it across all aspects of a running application. Ports, volumes (i.e. the files you want to be available inside the container from outside the container), environment variables and so on. For us as developers this allows us to rule your system out when troubleshooting, as the environment the container runs in is identical from one host to the next.
Containers are an amazing way to run applications in a secure, sandboxed way.","title":"Key Concepts"},{"location":"general/docker-compose/","text":"Docker Compose Intro Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application\u2019s services. Then, with a single command, you create and start all the services from your configuration. Note that when inputting data for variables, you must follow standard YAML rules. In the case of passwords with special characters this can mean escaping them properly ($ is the escape character) or properly quoting the variable. The best course of action if you do not know how to do this or are unwilling to research, is to stick to alphanumeric characters only. Installation Install Option 1 (recommended): Starting with version 2, Docker started publishing docker compose as a go based plugin for docker (rather than a python based standalone binary). And they also publish this plugin for various arches, including x86_64, armhf and aarch64 (as opposed to the x86_64 only binaries for v1.X). Therefore we updated our recommended install option to utilize the plugin. You can install docker compose via the following commands: ARCH=$(uname -m) && [[ \"${ARCH}\" == \"armv7l\" ]] && ARCH=\"armv7\" sudo mkdir -p /usr/local/lib/docker/cli-plugins sudo curl -SL \"https://github.com/docker/compose/releases/latest/download/docker-compose-linux-${ARCH}\" -o /usr/local/lib/docker/cli-plugins/docker-compose sudo chmod +x /usr/local/lib/docker/cli-plugins/docker-compose Assuming you already have docker (or at the very least docker-cli) installed, preferably from the official docker repos, running docker compose version should display the compose version. 
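Tying back to the YAML note at the top of this section, here is a minimal, hypothetical compose fragment showing the two common escapes (the service name and values are illustrative only):

```yaml
services:
  db:
    image: linuxserver/mariadb
    environment:
      # A literal "$" must be doubled, or compose will treat it as a
      # variable reference; the container receives "sup3r$ecret".
      - MYSQL_ROOT_PASSWORD=sup3r$$ecret
      # Quoting protects values YAML would otherwise misparse, e.g. ": ".
      - "DB_COMMENT=staging: do not use"
```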
If you don't have docker installed yet, we recommend installing it via the following commands: curl -fsSL https://get.docker.com -o get-docker.sh sh get-docker.sh v1.X compatibility: As v2 runs as a plugin instead of a standalone binary, it is invoked by docker compose <args> instead of docker-compose <args> . There are also some slight differences in how the yaml is handled as well. To make migration easier, Docker released a replacement binary for docker-compose on x86_64 and aarch64 platforms. More info on that can be found at the upstream repo . Install Option 2: You can install docker-compose using our docker-compose image via a run script. You can simply run the following commands on your system and you should have a functional install that you can call from anywhere as docker-compose : sudo curl -L --fail https://raw.githubusercontent.com/linuxserver/docker-docker-compose/v2/run.sh -o /usr/local/bin/docker-compose sudo chmod +x /usr/local/bin/docker-compose In order to update the local image, you can run the following commands: docker pull linuxserver/docker-compose:\"${DOCKER_COMPOSE_IMAGE_TAG:-v2}\" docker image prune -f The above commands will use the v2 images (although invoked by docker-compose instead of docker compose ). If you'd like to use v1 images, you can set an env var DOCKER_COMPOSE_IMAGE_TAG=alpine or DOCKER_COMPOSE_IMAGE_TAG=ubuntu in your respective .profile . Alternatively you can set that var to a versioned image tag like v2-2.4.1-r1 or version-alpine-1.27.4 to pin it to a specific docker-compose version.
Single service Usage Here's a basic example for deploying a Linuxserver container with docker compose: version: \"2.1\" services: heimdall: image: linuxserver/heimdall container_name: heimdall volumes: - /home/user/appdata/heimdall:/config environment: - PUID=1000 - PGID=1000 - TZ=Europe/London ports: - 80:80 - 443:443 restart: unless-stopped If you save the above snippet in a file named docker-compose.yml , you can simply run docker compose up -d from within the same folder and the heimdall image will be automatically pulled, and a container will be created and started. up means bring the services up, and -d means do it in the background. If you want to do it from a different folder or if you named the yaml file differently, ie. heimdall.yml , then you can define it in the command with -f : docker compose -f /path/to/heimdall.yml up -d To bring down the services, simply do docker compose down or docker compose -f /path/to/heimdall.yml down and all containers defined by the yml will be stopped and destroyed. Multiple Service Usage You can have multiple services managed by a single compose yaml. Copy the contents below the services: line in any of our readme yaml samples into the same yaml file and the docker compose up/down commands will apply to all services at once. 
Let's say you have the following in a yaml file named docker-compose.yml:

version: "2.1"
services:
  heimdall:
    image: linuxserver/heimdall
    container_name: heimdall
    volumes:
      - /home/user/appdata/heimdall:/config
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/London
    ports:
      - 80:80
      - 443:443
    restart: unless-stopped
  nginx:
    image: linuxserver/nginx
    container_name: nginx
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/London
    volumes:
      - /home/user/appdata/nginx:/config
    ports:
      - 81:80
      - 444:443
    restart: unless-stopped
  mariadb:
    image: linuxserver/mariadb
    container_name: mariadb
    environment:
      - PUID=1000
      - PGID=1000
      - MYSQL_ROOT_PASSWORD=ROOT_ACCESS_PASSWORD
      - TZ=Europe/London
    volumes:
      - /home/user/appdata/mariadb:/config
    ports:
      - 3306:3306
    restart: unless-stopped

You now have 3 services defined: heimdall, nginx and mariadb. When you do a docker compose up -d, it will first download the images for all three if they don't already exist (existing images are not updated), then create all three containers and start them. docker compose down will bring all three services down and destroy the containers (persistent data will remain).

Updates

If you want to update the images and recreate the containers with the same vars, it's extremely easy with docker compose. First we tell it to update all images via docker compose pull. Then we issue docker compose up -d and it will automatically recreate the containers (as necessary) based on the updated images. If a container's image is already the latest and there was no update, it remains untouched. Similarly, if you edit the contents of the yaml file and re-issue docker compose up -d, only the containers affected by the changes to the yaml file will be recreated; others will be untouched. Defining the containers running on your server as code is a core tenet of a "DevOps" approach to the world.
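The pull-then-up update flow can be wrapped in a small helper. This is a sketch, not LinuxServer tooling: the function name and the DRY_RUN flag are illustrative, and the dry-run branch just prints the commands it would run.

```shell
# Minimal update helper: pull updated images, then recreate affected containers.
update_stack() {
  compose_file="$1"
  for cmd in "docker compose -f ${compose_file} pull" \
             "docker compose -f ${compose_file} up -d"; do
    if [ "${DRY_RUN:-0}" = "1" ]; then
      echo "[dry-run] ${cmd}"   # show what would run, without touching docker
    else
      ${cmd}                    # actually run it
    fi
  done
}

DRY_RUN=1
update_stack /opt/docker-compose.yml
```

Running it with DRY_RUN=1 prints the two commands in order, which makes the sequence easy to verify before letting it touch a live stack.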
Constructing elaborate docker run commands and then forgetting which variables you passed is a thing of the past when using docker compose.

Support Requests

If you would like to request support, you can do so on our forum or on our discord server. When you do, please provide all the necessary information, like the server and platform info, the docker container log, and the compose yaml. If your compose yaml makes use of .env, please post the output of docker compose convert (or docker compose convert -f /path/to/compose.yml), as it will automatically replace the environment variables with their actual values.

Tips & Tricks

docker compose expects a docker-compose.yml file in the current directory, and if one isn't present it will complain. In order to improve your quality of life, we suggest the use of bash aliases. The file paths for the aliases below assume that the docker-compose.yml file is kept in the folder /opt. If your compose file is kept somewhere else, like in a home directory, then the path will need to be changed. Create or open the file ~/.bash_aliases and populate it with the following content:

alias dcup='docker compose -f /opt/docker-compose.yml up -d' #brings up all containers if one is not defined after dcup
alias dcdown='docker compose -f /opt/docker-compose.yml stop' #brings down all containers if one is not defined after dcdown
alias dcpull='docker compose -f /opt/docker-compose.yml pull' #pulls all new images unless one is specified after dcpull
alias dclogs='docker compose -f /opt/docker-compose.yml logs -tf --tail="50"'
alias dtail='docker logs -tf --tail="50" "$@"'

If the docker-compose.yml file is in a home directory, the following can be put in the ~/.bash_aliases file instead.
alias dcup='docker-compose -f ~/docker-compose.yml up -d' #brings up all containers if one is not defined after dcup
alias dcdown='docker-compose -f ~/docker-compose.yml stop' #brings down all containers if one is not defined after dcdown
alias dcpull='docker-compose -f ~/docker-compose.yml pull' #pulls all new images unless one is specified
alias dclogs='docker-compose -f ~/docker-compose.yml logs -tf --tail="50"'
alias dtail='docker logs -tf --tail="50" "$@"'

There are multiple ways to see the logs of your containers. In some instances, using docker logs is preferable to docker compose logs. By default, docker logs will not run unless you define which service the logs are coming from, whereas docker compose logs will pull the logs for all of the services defined in the docker-compose.yml file. When asking for help, you should post your logs or be ready to provide them if someone requests it. If you are running multiple containers in your docker-compose.yml file, it is not helpful to submit all of the logs. If you are experiencing issues with a single service, say Heimdall, then you would want to get your logs using docker logs heimdall or docker compose logs heimdall. The bash alias dclogs can be used if you define your service after you've typed the alias. Likewise, the bash alias dtail will not run without defining the service after it.

Some distributions, like Ubuntu, already have the code snippet below in the ~/.bashrc file. If it is not included, you'll need to add the following to your ~/.bashrc file in order for the aliases file to be picked up:

if [ -f ~/.bash_aliases ]; then
    . ~/.bash_aliases
fi

Once configured, you can run source ~/.bashrc or log out and log back in. Now you can type dcpull or dcup to manage your entire fleet of containers at once.
It's like magic.

Intro

Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application's services. Then, with a single command, you create and start all the services from your configuration. Note that when inputting data for variables, you must follow standard YAML rules. In the case of passwords with special characters, this can mean escaping them properly ($ is the escape character) or quoting the variable. If you do not know how to do this, or are unwilling to research it, the best course of action is to stick to alphanumeric characters only.

Installation

Install Option 1 (recommended):

Starting with version 2, Docker publishes docker compose as a Go-based plugin for docker (rather than a Python-based standalone binary), and it publishes this plugin for various arches, including x86_64, armhf and aarch64 (as opposed to the x86_64-only binaries for v1.X). We therefore updated our recommended install option to utilize the plugin. You can install docker compose via the following commands:

ARCH=$(uname -m) && [[ "${ARCH}" == "armv7l" ]] && ARCH="armv7"
sudo mkdir -p /usr/local/lib/docker/cli-plugins
sudo curl -SL "https://github.com/docker/compose/releases/latest/download/docker-compose-linux-${ARCH}" -o /usr/local/lib/docker/cli-plugins/docker-compose
sudo chmod +x /usr/local/lib/docker/cli-plugins/docker-compose

Assuming you already have docker (or at the very least docker-cli) installed, preferably from the official docker repos, running docker compose version should display the compose version.
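The YAML escaping note in the intro can be illustrated with a hypothetical environment block (the variable name and values are examples only, not from any image's readme):

```yaml
environment:
  - PASSWORD=simple123           # alphanumeric, no quoting needed
  - "PASSWORD=p@ss word!"        # quoted so YAML keeps the value intact
  - PASSWORD=pa$$word            # compose reads $$ as a literal $
```

The last form matters because compose performs variable interpolation on single `$` sequences; doubling the `$` keeps it literal.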
Fleet

How Fleet works

Fleet stores a snapshot of Docker images in its own database, consisting of the metadata deemed most pertinent to both the users of the images and the repository owner. It synchronizes with Docker Hub over a set interval in order to update its stored data. It then displays this snapshot data on its own status page as a useful list, containing links to each repository and image owned by the repository owner. Each image also carries a status which is managed by the repository owner, who can define images as either Stable or Unstable. This is designed to quickly let users know when an image is undergoing a state of instability which is known by the owner.

Why a snapshot?

In short, Docker Hub's API is very slow. It would not be a good long-term solution to just proxy the responses from Docker Hub and translate the data into something considered useful by downstream clients. By caching the image information in its own database, Fleet is able to return the status data for each image and repository more efficiently. In doing so, it is also able to provide more concise data, such as image versions, as part of the primary response, rather than requiring users to make a separate call. As an example, here is a comparison between obtaining all image name, pull and version information for all LinuxServer images from Docker Hub, and obtaining that same data via Fleet's API:

API | Time (ms)
Docker Hub (multiple calls) | 52000ms
Fleet | 50ms

Capabilities

Fleet has the ability to display images with a particular state, which provides contextual information to visitors of the application's main page.

Hidden

If an image is hidden, it will not be displayed as part of the main list, nor will it be returned as part of any API calls. This also means that the pull count of a hidden image is not included.

Unstable

Marks an image as having issues known by the maintainer.
A useful state to assign to an image if the latest build (or builds) are causing downstream breakages. This may also be useful if an upstream dependency or application is causing breakages in the image directly.

Deprecated

If the maintainer of the image or the upstream application no longer wishes to provide support, or if the image has reached its end-of-life (or has been superseded by another), marking an image as deprecated will ensure users are made aware that no further updates will be supplied and that they should stop using it. Deprecation notices are also provided to give context.

API

Fleet exposes a single API endpoint which can be used to obtain image list and pull count information for all relevant images maintained by the repository.

{% api-method method="get" host="https://fleet.linuxserver.io" path="/api/v1/images" %}
{% api-method-summary %}
Get All Repositories and Images
{% endapi-method-summary %}
{% api-method-description %}
Returns all synchronized images.
{% endapi-method-description %}
{% api-method-spec %}
{% api-method-request %}
{% api-method-response %}
{% api-method-response-example httpCode=200 %}
{% api-method-response-example-description %}
All synchronized repositories and images returned.
{% endapi-method-response-example-description %}

{
  "status": "OK",
  "data": {
    "totalPullCount": 1862494227,
    "repositories": {
      "lsiobase": [
        { "name": "alpine", "pullCount": 4275970, "version": "3.6", "stable": true },
        { "name": "alpine.arm64", "pullCount": 66234, "version": "edge", "stable": true },
        ...
      ],
      "linuxserver": [
        { "name": "airsonic", "pullCount": 4608329, "version": "v10.2.1", "stable": true },
        { "name": "apache", "pullCount": 3011699, "version": "latest", "stable": true },
        ...
      ]
      ...
    }
  }
}

{% endapi-method-response-example %}
{% endapi-method-response %}
{% endapi-method-spec %}
{% endapi-method %}

{% hint style="info" %}
Any repositories not synchronized with Docker Hub (e.g. staging or metadata repositories) will not be returned as part of the API. This also applies to images which the repository owner does not wish to be part of the primary image list.
{% endhint %}

Running Fleet

{% hint style="warning" %}
Fleet is a Java application and requires at least JRE 11.
{% endhint %}

Grab the latest Fleet release from GitHub.

SQL

Fleet stores its data in a MariaDB database which you need to provide. In order for the application to manage its tables and procedures, the user you provide needs to have the relevant GRANT permissions on the fleet database. The following script should be sufficient to get the initial database set up:

CREATE SCHEMA `fleet`;
CREATE USER 'fleet_user' IDENTIFIED BY 'supersecretpassword';
GRANT ALL ON `fleet`.* TO 'fleet_user';

The username and password that you define must then be provided as part of Fleet's configuration.

Configuration File

All primary configuration for Fleet is loaded at runtime via a fleet.properties file. This can be located anywhere on the file system and is loaded in via a runtime argument:

# Runtime
fleet.app.port=8080

# Database Connectivity
fleet.database.driver=org.mariadb.jdbc.Driver
fleet.database.url=jdbc:mariadb://:3306/fleet
fleet.database.username=
fleet.database.password=

# Password security
fleet.admin.secret=

All configuration can be loaded either via the config file, via JVM arguments, or via the system environment. Fleet will first look in the configuration file, then the JVM runtime, and finally the system environment. It will load the first value it finds, which can be useful when needing to override specific properties.

{% hint style="info" %}
If you place a property in the system environment, ensure that the property uses underscores rather than periods. This is due to a limitation in BASH environments, where exported variables must not contain this character. E.g. fleet.app.port=8080 becomes export fleet_app_port=8080
{% endhint %}

Property Name | Purpose
fleet.app.port | The port which the application will run under.
fleet.admin.secret | A string used as part of the password key derivation process. This secret is prepended to the raw password before its key is derived, providing further pseudo-randomness to hashed passwords. Once set, this must not be changed! It must remain the same, as it is used during the password verification step. If Fleet is restarted with this removed or set differently, password verification will fail because previously hashed passwords will have been derived with the old secret.
fleet.database.driver | The driver to use for connections to Fleet's database. This should be org.mariadb.jdbc.Driver
fleet.database.url | The full JDBC connection string to the database.
fleet.database.username | The username of the SQL user which will be managing the data in the Fleet database. This user should have full GRANT access to the fleet database, as Fleet also manages any database migrations.
fleet.database.password | The password for the SQL user.

Runtime Arguments

As well as the base configuration file, Fleet also supports some runtime arguments by means of the -D flag. These can be used to direct Fleet to behave in a specific way at runtime.

{% hint style="info" %}
Unlike the properties defined above, these properties are only accessed via the JVM arguments (-D).
{% endhint %}

Runtime Argument | Purpose
fleet.config.base | The absolute path of the configuration file.
fleet.show.passwords | Tells Fleet to show passwords in plain text in its logs. Not recommended.
fleet.nuke.database | Be very careful. This will tell Fleet to completely wipe and rebuild its database. This can be useful if the owner deems the database to be too far out of synchronisation with Docker Hub, or if images have since been removed but are still showing in Fleet.
fleet.skip.sync.on.startup | By default, Fleet will run a synchronisation process when it first starts up. Setting this flag tells it to skip the first run. The next synchronisation will be at the set interval.

Default User

When starting Fleet for the first time, it will create a default user so that you can log in and manage the repositories/images synchronised by the application. The default credentials are:

Username: admin
Password: admin

{% hint style="warning" %}
You should change the default password for this user as soon as possible! This can be done via the Admin -> Users menu options.
{% endhint %}
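The period-to-underscore conversion described in the configuration hint can be sketched in shell; the `echo` is illustrative and the property name is taken from the sample config above:

```shell
# Exported shell variables cannot contain periods, so a Fleet property name
# must be converted to its environment-variable form before exporting.
prop="fleet.app.port"
env_name="$(printf '%s' "$prop" | tr '.' '_')"
echo "export ${env_name}=8080"   # -> export fleet_app_port=8080
```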
The next synchronisation will be at the set interval.","title":"Runtime Arguments"},{"location":"general/fleet/#default-user","text":"When starting Fleet for the first time it will create a default user in order for you to log in and manage the repositories/images synchronised by the application. The default username and password are: Username : admin Password : admin {% hint style=\"warning\" %} You should change the default password for this user as soon as possible! This can be done via the Admin -> Users menu options. {% endhint %}","title":"Default User"},{"location":"general/running-our-containers/","text":"Running LinuxServer Containers Image Structure Base Images We have curated various base images which our main application images derive from. This is beneficial for two main reasons: A common dependency base between multiple images, reducing the likelihood of variation between two or more applications that share the same dependencies. Reduction in image footprint on your host machine by fully utilising Docker's image layering system. Multiple containers running locally that share the same base image will reuse that image and any of its ancestors. The /config volume To help reduce variation between our images, we have adopted a common structure pattern for application config and dependent directories. This means that each image has its own internal /config directory which holds all application-specific configuration. With the exception of a small number of images, all of our images expose this volume. We do this because we believe that it makes it easier to answer the common question of \"where does the application data get persisted?\" - the answer being \"always in /config \". If you don't map this directory when creating your containers, the config will only last as long as the lifespan of the container itself! Creating a Container To create a container from one of our images, you must use either docker create or docker run . 
Each image follows the same pattern in the command when creating a container: docker create \\ --name= \\ -v :/config \\ -e PUID= \\ -e PGID= \\ -p : \\ linuxserver/","title":"Running LinuxServer Containers"},{"location":"general/running-our-containers/#running-linuxserver-containers","text":"","title":"Running LinuxServer Containers"},{"location":"general/running-our-containers/#image-structure","text":"","title":"Image Structure"},{"location":"general/running-our-containers/#base-images","text":"We have curated various base images which our main application images derive from. This is beneficial for two main reasons: A common dependency base between multiple images, reducing the likelihood of variation between two or more applications that share the same dependencies. Reduction in image footprint on your host machine by fully utilising Docker's image layering system. Multiple containers running locally that share the same base image will reuse that image and any of its ancestors.","title":"Base Images"},{"location":"general/running-our-containers/#the-config-volume","text":"To help reduce variation between our images, we have adopted a common structure pattern for application config and dependent directories. This means that each image has its own internal /config directory which holds all application-specific configuration. With the exception of a small number of images, all of our images expose this volume. We do this because we believe that it makes it easier to answer the common question of \"where does the application data get persisted?\" - the answer being \"always in /config \". If you don't map this directory when creating your containers, the config will only last as long as the lifespan of the container itself!","title":"The /config volume"},{"location":"general/running-our-containers/#creating-a-container","text":"To create a container from one of our images, you must use either docker create or docker run . 
Each image follows the same pattern in the command when creating a container: docker create \\ --name= \\ -v :/config \\ -e PUID= \\ -e PGID= \\ -p : \\ linuxserver/","title":"Creating a Container"},{"location":"general/swag/","text":"The goal of this guide is to give you ideas on what can be accomplished with the LinuxServer SWAG docker image and to get you started. We will explain some of the basic concepts and limitations, and then we'll provide you with common examples. If you have further questions, you can ask on our forum or join our Discord for conversations: https://discord.gg/YWrKVTn Table of Contents Introduction What are SSL certs? What is Let's Encrypt (and/or ZeroSSL)? Creating a SWAG container Docker cli Docker compose Authorization method Cert provider (Let's Encrypt vs ZeroSSL) Port forwards Docker networking Container setup examples Create container via http validation Create container via dns validation with a wildcard cert Create container via duckdns validation with a wildcard cert Web hosting examples Simple html web page hosting Hosting a Wordpress site Reverse Proxy Preset proxy confs Understanding the proxy conf structure Subdomain proxy conf Subfolder proxy conf Ombi subdomain reverse proxy example Nextcloud subdomain reverse proxy example Plex subfolder reverse proxy example Using Heimdall as the home page at domain root Troubleshooting Common errors 404 502 Final Thoughts How to Request Support Introduction What are SSL certs? SSL certs allow users of a service to communicate via encrypted data transmitted up and down. Third party trusted certs also allow users to make sure that the remote service they are connecting to is really who they say they are and not someone else in the middle. When we run a web server for reasons like hosting websites or reverse proxying services on our own domain, we need to set it up with third party trusted ssl certs so client browsers trust it and communicate with it securely. 
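The PUID and PGID placeholders in the docker create template above are typically set to the IDs of the host user that owns the config directory; a quick way to look them up (a sketch, assuming the current shell user is the intended owner):

```shell
# Look up the current user's numeric UID/GID; these become the
# -e PUID= and -e PGID= values in docker create/run.
PUID=$(id -u)
PGID=$(id -g)
echo "-e PUID=$PUID -e PGID=$PGID"
```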
When you connect to a website with a trusted cert, most browsers show a padlock icon next to the address bar to indicate that. Without a trusted cert (ie. with self signed cert) most browsers show warning pages or may block access to the website as the website identity cannot be confirmed via a trusted third party. What is Let's Encrypt (and/or ZeroSSL)? In the past, the common way to get a trusted ssl cert was to contact one of the providers, send them the relevant info to prove ownership of a domain and pay for the service. Nowadays, with Let's Encrypt and ZeroSSL , one can get free certs via automated means. The SWAG docker image , published and maintained by LinuxServer.io , makes setting up a full-fledged web server with auto generated and renewed ssl certs very easy. It is essentially an nginx webserver with php7, fail2ban (intrusion prevention) and Let's Encrypt cert validation built-in. It is just MySQL short of a LEMP stack and therefore is best paired with our MariaDB docker image . Creating a SWAG container Most of the initial settings for getting a webserver with ssl certs up are done through the docker run/create or compose yaml parameters. Here's a list of all the settings available including the optional ones. It is safe to remove unnecessary parameters for different scenarios. docker cli docker create \\ --name=swag \\ --cap-add=NET_ADMIN \\ -e PUID=1000 \\ -e PGID=1000 \\ -e TZ=Europe/London \\ -e URL=yourdomain.url \\ -e SUBDOMAINS=www, \\ -e VALIDATION=http \\ -e CERTPROVIDER= `#optional` \\ -e DNSPLUGIN=cloudflare `#optional` \\ -e DUCKDNSTOKEN= `#optional` \\ -e EMAIL= `#optional` \\ -e ONLY_SUBDOMAINS=false `#optional` \\ -e EXTRA_DOMAINS= `#optional` \\ -e STAGING=false `#optional` \\ -p 443:443 \\ -p 80:80 `#optional` \\ -v :/config \\ --restart unless-stopped \\ lscr.io/linuxserver/swag docker-compose Compatible with docker-compose v2 schemas. 
--- version: \"2.1\" services: swag: image: lscr.io/linuxserver/swag container_name: swag cap_add: - NET_ADMIN environment: - PUID=1000 - PGID=1000 - TZ=Europe/London - URL=yourdomain.url - SUBDOMAINS=www, - VALIDATION=http - CERTPROVIDER= #optional - DNSPLUGIN=cloudflare #optional - DUCKDNSTOKEN= #optional - EMAIL= #optional - ONLY_SUBDOMAINS=false #optional - EXTRA_DOMAINS= #optional - STAGING=false #optional volumes: - :/config ports: - 443:443 - 80:80 #optional restart: unless-stopped Authorization method Our image currently supports three different methods to validate domain ownership: http: Let's Encrypt (acme) server connects to domain on port 80 Can be owned domain or a dynamic dns address dns: Let's Encrypt (acme) server connects to dns provider Api credentials and settings entered into ini files under /config/dns-conf/ Supports wildcard certs Need to have own domain name (non-free) duckdns: Let's Encrypt (acme) server connects to DuckDNS Supports wildcard certs (only for the sub-subdomains) No need for own domain (free) The validation is performed when the container is started for the first time. Nginx won't be up until ssl certs are successfully generated. The certs are valid for 90 days. The container will check the cert expiration status every night and if they are to expire within 30 days, it will attempt to auto-renew. If your certs are about to expire in less than 30 days, check the logs under /config/log/letsencrypt to see why the auto-renewals failed. Cert Provider (Let's Encrypt vs ZeroSSL) As of January 2021, SWAG supports getting certs validated by either Let's Encrypt or ZeroSSL . Both services use the ACME protocol as the underlying method to validate ownership. Our Certbot client in the SWAG image is ACME compliant and therefore supports both services. 
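The renewal schedule described above (90-day certs, renewed once fewer than 30 days remain) boils down to simple date arithmetic; an illustrative sketch, not the container's actual renewal script:

```shell
# Decide whether a cert is due for renewal, working in epoch seconds.
# The expiry value here is made up for illustration.
now=$(date +%s)
expiry=$(( now + 25 * 86400 ))          # pretend the cert expires in 25 days
days_left=$(( (expiry - now) / 86400 ))
if [ "$days_left" -lt 30 ]; then
  echo "renew (only $days_left days left)"
else
  echo "cert ok for $days_left more days"
fi
```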
Although very similar, ZeroSSL does (at the time of writing) have a couple of advantages over Let's Encrypt: * ZeroSSL provides unlimited certs via ACME and has no rate limits or throttling (it's quite common for new users to get throttled by Let's Encrypt due to multiple unsuccessful attempts to validate) * ZeroSSL provides a web interface that allows users to list and manage the certs they have received SWAG currently defaults to Let's Encrypt as the cert provider so as not to break existing installs; however, users can override that behavior by setting the environment variable CERTPROVIDER=zerossl to retrieve a cert from ZeroSSL instead. The only gotcha is that ZeroSSL requires the EMAIL env var to be set so the certs can be tied to a ZeroSSL account for management over their web interface. Port forwards Port 443 mapping is required for access through https://domain.com . However, you don't necessarily need to have it listen on port 443 on the host server. All that is needed is to have port 443 on the router (wan) somehow forward to port 443 inside the container, while it can go through a different port on the host. For instance, it is OK to have port 443 on the router (wan) forward to port 444 on the host, and then map port 444 to port 443 in docker run/create or compose yml. Port 80 forwarding is required for http validation only. The same rule as above applies, and it's OK to go from 80 on the router to 81 on the host, mapped to 80 in the container. Docker networking The SWAG container happily runs with bridge networking. However, the default bridge network in docker does not allow containers to connect to each other via container names used as dns hostnames. Therefore, it is recommended to first create a user defined bridge network and attach the containers to that network. 
If you are using docker-compose, and your services are on the same yaml, you do not need to do this, because docker-compose automatically creates a user defined bridge network and attaches each container to it as long as no other networking option is defined in their config. For the below examples, we will use a network named lsio . We can create it via docker network create lsio . After that, any container that is created with --net=lsio can ping each other by container name as dns hostname. Keep in mind that dns hostnames are meant to be case-insensitive, however container names are case-sensitive. For container names to be used as dns hostnames in nginx, they should be all lowercase as nginx will convert them to all lowercase before trying to resolve. Container setup examples Create container via http validation Let's assume our domain name is linuxserver-test.com and we would like our cert to also cover www.linuxserver-test.com and ombi.linuxserver-test.com . On the router, forward ports 80 and 443 to your host server. On your dns provider (if using your own domain), create an A record for the main domain and point it to your server IP (wan). Also create CNAMES for www and ombi and point them to the A record for the domain. With docker cli, we'll first create a user defined bridge network if we haven't already docker network create lsio , and then create the container: docker create \\ --name=swag \\ --cap-add=NET_ADMIN \\ --net=lsio \\ -e PUID=1000 \\ -e PGID=1000 \\ -e TZ=Europe/London \\ -e URL=linuxserver-test.com \\ -e SUBDOMAINS=www,ombi \\ -e VALIDATION=http \\ -p 443:443 \\ -p 80:80 \\ -v /home/aptalca/appdata/swag:/config \\ --restart unless-stopped \\ lscr.io/linuxserver/swag Once created, we do docker start swag to start it. 
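As noted above, nginx lowercases hostnames before resolving them, so a mixed-case container name will never resolve from a proxy conf; the safe transformation is simply:

```shell
# Container names used as dns hostnames in nginx proxy confs
# must be all lowercase; nginx lowercases before resolving.
name="MyHeimdall"
safe_name=$(echo "$name" | tr '[:upper:]' '[:lower:]')
echo "$safe_name"   # prints myheimdall
```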
With docker compose, we can use the following yml: --- version: \"2.1\" services: swag: image: lscr.io/linuxserver/swag container_name: swag cap_add: - NET_ADMIN environment: - PUID=1000 - PGID=1000 - TZ=Europe/London - URL=linuxserver-test.com - SUBDOMAINS=www,ombi - VALIDATION=http volumes: - /home/aptalca/appdata/swag:/config ports: - 443:443 - 80:80 restart: unless-stopped We can fire up the container with docker-compose up -d After the container is started, we'll watch the logs with docker logs swag -f . After some initial initialization, we will see the validation steps. After all the steps, it should print Server ready in the logs. Now we can browse to https://www.linuxserver-test.com and we'll see the default landing page displayed. Create container via dns validation with a wildcard cert Let's assume our domain name is linuxserver-test.com and we would like our cert to also cover www.linuxserver-test.com , ombi.linuxserver-test.com and any other subdomain possible. On the router, we'll forward port 443 to our host server (Port 80 forwarding is optional). We'll need to make sure that we are using a dns provider that is supported by this image. Currently the following dns plugins are supported: cloudflare , cloudxns , digitalocean , dnsimple , dnsmadeeasy , google , luadns , nsone , ovh , rfc2136 and route53 . Your dns provider by default is the provider of your domain name and if they are not supported, it is very easy to switch to a different dns provider. Cloudflare is recommended due to being free and reliable. To switch to Cloudflare, you can register for a free account and follow their steps to point the nameservers to Cloudflare. The rest of the instructions assume that we are using the cloudflare dns plugin. On our dns provider, we'll create an A record for the main domain and point it to our server IP (wan). We'll also create a CNAME for * and point it to the A record for the domain. 
On Cloudflare, we'll click on the orange cloud to turn it grey so that it is dns only and not cached/proxied by Cloudflare, which would add more complexities. Now, let's get the container set up. With docker cli, we'll first create a user defined bridge network if we haven't already docker network create lsio , and then create the container: docker create \\ --name=swag \\ --cap-add=NET_ADMIN \\ --net=lsio \\ -e PUID=1000 \\ -e PGID=1000 \\ -e TZ=Europe/London \\ -e URL=linuxserver-test.com \\ -e SUBDOMAINS=wildcard \\ -e VALIDATION=dns \\ -e DNSPLUGIN=cloudflare \\ -p 443:443 \\ -p 80:80 \\ -v /home/aptalca/appdata/swag:/config \\ --restart unless-stopped \\ lscr.io/linuxserver/swag And we start the container via docker start swag With docker compose, we'll use: --- version: \"2.1\" services: swag: image: lscr.io/linuxserver/swag container_name: swag cap_add: - NET_ADMIN environment: - PUID=1000 - PGID=1000 - TZ=Europe/London - URL=linuxserver-test.com - SUBDOMAINS=wildcard - VALIDATION=dns - DNSPLUGIN=cloudflare volumes: - /home/aptalca/appdata/swag:/config ports: - 443:443 - 80:80 restart: unless-stopped Then we'll fire up the container via docker-compose up -d After the container is started, we'll watch the logs with docker logs swag -f . After some init steps, we'll notice that the container will give an error during validation due to wrong credentials. That's because we didn't enter the correct credentials for the Cloudflare api yet. We can browse to the location /config/dns-conf which is mapped from the host location (according to above settings) /home/aptalca/appdata/swag/dns-conf/ and edit the correct ini file for our dns provider. For Cloudflare, we'll enter our e-mail address and the api key. The api key can be retrieved by going to the Overview page and clicking on Get your API key link. We'll need the Global API Key . Once we enter the credentials into the ini file, we'll restart the docker container via docker restart swag and again watch the logs. 
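The ini file referred to above follows the Certbot dns plugin format; a representative sketch of /config/dns-conf/cloudflare.ini after filling in credentials (both values below are placeholders):

```ini
# Cloudflare API credentials used by Certbot (Global API Key method)
dns_cloudflare_email = cloudflare@example.com
dns_cloudflare_api_key = 0123456789abcdef0123456789abcdef01234
```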
After successful validation, we should see the notice Server ready and our webserver should be up and accessible at https://www.linuxserver-test.com . Create container via duckdns validation with a wildcard cert We will first need to get a subdomain from DuckDNS . Let's assume we get linuxserver-test so our url will be linuxserver-test.duckdns.org . Then we'll need to make sure that the subdomain points to our server IP (wan) on the DuckDNS website. We can always use our DuckDNS docker image to keep the IP up to date. Don't forget to get the token for your account from DuckDNS. On the router, we'll forward port 443 to our host server (Port 80 forward is optional). Now, let's get the container set up. With docker cli, we'll first create a user defined bridge network if we haven't already docker network create lsio , and then create the container: docker create \\ --name=swag \\ --cap-add=NET_ADMIN \\ --net=lsio \\ -e PUID=1000 \\ -e PGID=1000 \\ -e TZ=Europe/London \\ -e URL=linuxserver-test.duckdns.org \\ -e SUBDOMAINS=wildcard \\ -e VALIDATION=duckdns \\ -e DUCKDNSTOKEN=97654867496t0877648659765854 \\ -p 443:443 \\ -p 80:80 \\ -v /home/aptalca/appdata/swag:/config \\ --restart unless-stopped \\ lscr.io/linuxserver/swag And we start the container via docker start swag With docker compose, we'll use: --- version: \"2.1\" services: swag: image: lscr.io/linuxserver/swag container_name: swag cap_add: - NET_ADMIN environment: - PUID=1000 - PGID=1000 - TZ=Europe/London - URL=linuxserver-test.duckdns.org - SUBDOMAINS=wildcard - VALIDATION=duckdns - DUCKDNSTOKEN=97654867496t0877648659765854 volumes: - /home/aptalca/appdata/swag:/config ports: - 443:443 - 80:80 restart: unless-stopped Then we'll fire up the container via docker-compose up -d After the container is started, we'll watch the logs with docker logs swag -f . We'll see some initialization and then we will see the validation steps. After all the steps, it should print Server ready in the logs. 
Now we can access the webserver by browsing to https://www.linuxserver-test.duckdns.org . NOTICE: Due to a DuckDNS limitation, our cert only covers the wildcard subdomains, but it doesn't cover the main url. So if we try to access https://linuxserver-test.duckdns.org , we'll see a browser warning about an invalid ssl cert. But accessing it through the www (or ombi or any other) subdomain should work fine. Web hosting examples Simple html web page hosting Once we have a working container, we can drop our web documents in and modify the nginx config files to set up our webserver. All the necessary files are under /config which is mapped from the host location (set by above examples) /home/aptalca/appdata/swag . We can drop all of our web/html files into /config/www . The main site config nginx uses can be found at /config/nginx/site-confs/default . Don't delete this file, as it will be regenerated on container restart, but feel free to modify as needed. By default, it is listening on port 443, and the root folder is set to /config/www , so if you drop a page1.html into that location, it will be accessible at https://linuxserver-test.com/page1.html . To enable listening on port 80 and automatically redirecting to port 443 for enforcing ssl, uncomment the lines at the top of the default site config so it reads: # redirect all traffic to https server { listen 80; listen [::]:80; server_name _; return 301 https://$host$request_uri; } After any changes to the config files, simply restart the container via docker restart swag to reload the nginx config. Hosting a Wordpress site Wordpress requires a mysql database. For that, we'll use the linuxserver MariaDB docker image . Here's a docker compose stack to get both containers set up. 
For this exercise, we'll utilize the cloudflare dns plugin for Let's Encrypt validation, but you can use any other method to set it up as described in this linked section : --- version: \"2.1\" services: mariadb: image: lscr.io/linuxserver/mariadb container_name: mariadb environment: - PUID=1000 - PGID=1000 - MYSQL_ROOT_PASSWORD=mariadbpassword - TZ=Europe/London - MYSQL_DATABASE=WP_database - MYSQL_USER=WP_dbuser - MYSQL_PASSWORD=WP_dbpassword volumes: - /home/aptalca/appdata/mariadb:/config restart: unless-stopped swag: image: lscr.io/linuxserver/swag container_name: swag cap_add: - NET_ADMIN environment: - PUID=1000 - PGID=1000 - TZ=Europe/London - URL=linuxserver-test.com - SUBDOMAINS=wildcard - VALIDATION=dns - DNSPLUGIN=cloudflare volumes: - /home/aptalca/appdata/swag:/config ports: - 443:443 - 80:80 depends_on: - mariadb restart: unless-stopped And here are the docker cli versions (make sure you already created the lsio network as described above : Mariadb: docker create \\ --name=mariadb \\ --net=lsio \\ -e PUID=1000 \\ -e PGID=1000 \\ -e MYSQL_ROOT_PASSWORD=mariadbpassword \\ -e TZ=Europe/London \\ -e MYSQL_DATABASE=WP_database \\ -e MYSQL_USER=WP_dbuser \\ -e MYSQL_PASSWORD=WP_dbpassword \\ -v /home/aptalca/appdata/mariadb:/config \\ --restart unless-stopped \\ lscr.io/linuxserver/mariadb SWAG: docker create \\ --name=swag \\ --cap-add=NET_ADMIN \\ --net=lsio \\ -e PUID=1000 \\ -e PGID=1000 \\ -e TZ=Europe/London \\ -e URL=linuxserver-test.com \\ -e SUBDOMAINS=wildcard \\ -e VALIDATION=dns \\ -e DNSPLUGIN=cloudflare \\ -p 443:443 \\ -p 80:80 \\ -v /home/aptalca/appdata/swag:/config \\ --restart unless-stopped \\ lscr.io/linuxserver/swag Once the SWAG container is set up with ssl certs and the webserver is up, we'll download the latest Wordpress and untar it into our www folder: wget https://wordpress.org/latest.tar.gz tar xvf latest.tar.gz -C /home/aptalca/appdata/swag/www/ rm latest.tar.gz Now that we have all the wordpress files under the container's 
/config/www/wordpress folder, we'll change the root directive in our SWAG default site conf to point there. We'll find the line in /config/nginx/site-confs/default that reads root /config/www; and change it to root /config/www/wordpress; and restart SWAG. Now we should be able to access our wordpress config page at https://linuxserver-test.com/wp-admin/install.php . We'll go ahead and enter mariadb as the Database Host address (we are using the container name as the dns hostname since both containers are in the same user defined bridge network), and also enter the Database Name, user and password we used in the mariadb config above ( WP_database , WP_dbuser and WP_dbpassword ). Once we go through the rest of the install steps, our wordpress instance should be fully set up and available at https://linuxserver-test.com . If you would like to have http requests on port 80 enabled and auto redirected to https on port 443, uncomment the relevant lines at the top of the default site config to read: # redirect all traffic to https server { listen 80; listen [::]:80; server_name _; return 301 https://$host$request_uri; } Reverse Proxy A reverse proxy is a type of proxy server that retrieves resources on behalf of a client from one or more servers. These resources are then returned to the client as if they originated from the Web server itself (Shamelessly borrowed from another post on our blog ). In this case, a user or a client browser can connect to our SWAG container via https on port 443, request a service such as Ombi, then our SWAG container connects to the ombi container, retrieves the data and passes it on to the client via https with our trusted cert. The connection to ombi is local and does not need to be encrypted, but all communication between our SWAG container and the client browser will be encrypted. Preset proxy confs Our SWAG image comes with a list of preset reverse proxy confs for popular apps and services. 
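The root directive change described above can also be scripted with sed; a sketch that demonstrates the edit on a scratch file (on a real install you would edit /config/nginx/site-confs/default and then run docker restart swag):

```shell
# Demonstrate the root-directive swap on a scratch copy rather than
# the live nginx site conf.
conf=$(mktemp)
printf 'root /config/www;\n' > "$conf"
sed -i 's|root /config/www;|root /config/www/wordpress;|' "$conf"
new_root=$(cat "$conf")
echo "$new_root"   # prints root /config/www/wordpress;
rm -f "$conf"
```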
They are hosted on Github and are pulled into the /config/nginx/proxy-confs folder as inactive sample files. To activate, one must rename a conf file to remove .sample from the filename and restart the SWAG container. Any proxy conf file in that folder with a name that matches *.subdomain.conf or *.subfolder.conf will be loaded in nginx during container start. Most proxy confs work without any modification, but some may require other changes. All the required changes are listed at the top of each proxy conf. The conf files use container names to reach other containers and therefore the proxied containers should be named the same as listed in our documentation for each container. The conf files also require that the SWAG container is in the same user defined bridge network as the other container so they can reach each other via container name as dns hostnames. Make sure you follow the instructions listed above in the Docker networking section . Understanding the proxy conf structure Subdomain proxy conf Here's the preset proxy conf for Heimdall as a subdomain (ie. 
https://heimdall.linuxserver-test.com ): # make sure that your dns has a cname set for heimdall server { listen 443 ssl; listen [::]:443 ssl; server_name heimdall.*; include /config/nginx/ssl.conf; client_max_body_size 0; # enable for ldap auth, fill in ldap details in ldap.conf #include /config/nginx/ldap.conf; # enable for Authelia #include /config/nginx/authelia-server.conf; location / { # enable the next two lines for http auth #auth_basic \"Restricted\"; #auth_basic_user_file /config/nginx/.htpasswd; # enable the next two lines for ldap auth #auth_request /auth; #error_page 401 =200 /ldaplogin; # enable for Authelia #include /config/nginx/authelia-location.conf; include /config/nginx/proxy.conf; resolver 127.0.0.11 valid=30s; set $upstream_app heimdall; set $upstream_port 443; set $upstream_proto https; proxy_pass $upstream_proto://$upstream_app:$upstream_port; } } Let's dissect this conf to look at what each directive or block does. server { } This is our server block. Whenever nginx gets a request from a client, it determines which server block should be processed based on the destination server name, port and other relevant info, and the matching server block determines how nginx handles and responds to the request. listen 443 ssl; listen [::]:443 ssl; This means that only requests coming to port 443 will match this server block. server_name heimdall.*; Only destination addresses that match heimdall.* will match this server block. include /config/nginx/ssl.conf; This directive injects the contents of our ssl.conf file here, which contains all ssl related settings (cert location, ciphers used, etc.). client_max_body_size 0; Removes the size limitation on uploads (default 1MB). # enable for ldap auth, fill in ldap details in ldap.conf #include /config/nginx/ldap.conf; Commented out (disabled) by default. When enabled, it will inject the contents of ldap.conf, necessary settings for LDAP auth. 
# enable for Authelia #include /config/nginx/authelia-server.conf; Commented out (disabled) by default. When enabled, it will inject the contents of authelia-server.conf, necessary settings for Authelia integration. location / { } Location blocks are used for subfolders or paths. After a server block is matched, nginx will look at the subfolder or path requested to match one of the location blocks inside the selected server block. This particular block in our example is for / so it will match any subfolder or path at this address. # enable the next two lines for http auth #auth_basic \"Restricted\"; #auth_basic_user_file /config/nginx/.htpasswd; Commented out (disabled) by default. When enabled, it will use .htpasswd to perform user/pass authentication before allowing access. # enable the next two lines for ldap auth #auth_request /auth; #error_page 401 =200 /login; Commented out (disabled) by default. When enabled, it will use LDAP authentication before allowing access. # enable for Authelia #include /config/nginx/authelia-location.conf; Commented out (disabled) by default. When enabled, it will use Authelia authentication before allowing access. include /config/nginx/proxy.conf; Injects the contents of proxy.conf, which contains various directives and headers that are common for proxied connections. resolver 127.0.0.11 valid=30s; Tells nginx to use the docker dns to resolve the IP address when the container name is used as address in the next line. set $upstream_app heimdall; set $upstream_port 443; set $upstream_proto https; proxy_pass $upstream_proto://$upstream_app:$upstream_port; This is a bit of a tricky part. Normally, we could just put in the directive proxy_pass https://heimdall:443; and expect nginx to connect to Heimdall via its container name used as a dns hostname. Although it works for the most part, nginx has an annoying habit. 
During start, nginx checks all dns hostnames used in proxy_pass statements and if any one of them is not accessible, it refuses to start. We really don't want a stopped proxied container to prevent our webserver from starting up, so we use a trick. If the proxy_pass statement contains a variable instead of a dns hostname , nginx doesn't check whether it's accessible or not during start. So here we are setting 3 variables, one named $upstream_app with the value of heimdall , one named $upstream_port with the value of the internal heimdall port 443 , and one named $upstream_proto with the value set to https . We then use these variables as the address in the proxy_pass directive. That way, if the heimdall container is down for any reason, nginx can still start. When using a variable instead of a hostname, we also have to set the resolver to docker dns in the previous line. If the proxied container is not in the same user defined bridge network as SWAG (could be on a remote host, could be using host networking or macvlan), we can change the value of $upstream_app to an IP address instead: set $upstream_app 192.168.1.10; Subfolder proxy conf Here's the preset proxy conf for mytinytodo via a subfolder # works with https://github.com/breakall/mytinytodo-docker # set the mtt_url to 'https://your.domain.com/todo/' in db/config.php location /todo { return 301 $scheme://$host/todo/; } location ^~ /todo/ { # enable the next two lines for http auth #auth_basic \"Restricted\"; #auth_basic_user_file /config/nginx/.htpasswd; # enable the next two lines for ldap auth, also customize and enable ldap.conf in the default conf #auth_request /auth; #error_page 401 =200 /ldaplogin; # enable for Authelia, also enable authelia-server.conf in the default site config #include /config/nginx/authelia-location.conf; include /config/nginx/proxy.conf; resolver 127.0.0.11 valid=30s; set $upstream_app mytinytodo; set $upstream_port 80; set $upstream_proto http; proxy_pass 
$upstream_proto://$upstream_app:$upstream_port/; } Unlike the subdomain proxy confs, here we do not have a server block. That is because all of the subfolder proxy confs get injected into the main server block of our root domain defined in the default site conf. So here we are only defining the location block for our specific subfolders. Many of the elements are the same as the subdomain ones, so for those you can refer to the previous section. Let's take a look at some of the differences. # works with https://github.com/breakall/mytinytodo-docker # set the mtt_url to 'https://your.domain.com/todo/' in db/config.php These are the instructions to get the mytinytodo container ready to work with our reverse proxy. location /todo { return 301 $scheme://$host/todo/; } Redirects requests for https://linuxserver-test.com/todo to https://linuxserver-test.com/todo/ (added forward slash at the end). location ^~ /todo/ { } Any requests sent to nginx where the destination starts with https://linuxserver-test.com/todo/ will match this location block. set $upstream_app mytinytodo; set $upstream_port 80; set $upstream_proto http; proxy_pass $upstream_proto://$upstream_app:$upstream_port/; As in the previous example, we set a variable $upstream_app with the value mytinytodo and tell nginx to use the variable as the address. Keep in mind that the port listed here is the container port because nginx is connecting to this container directly via the docker network. So if our mytinytodo container has a port mapping of -p 8080:80 , we still set the $upstream_port variable to 80 . Nginx has an interesting behavior displayed here. Even though we define http://$upstream_app:80/ as the address nginx should proxy, nginx actually connects to http://$upstream_app:80/todo . Whenever we use a variable as part of the proxy_pass url, nginx automatically appends the defined location (in this case /todo ) to the end of the proxy_pass url before it connects. 
If we include the subfolder in the proxy_pass address as well, nginx will try to connect to http://$upstream_app:80/todo/todo and will fail.

Ombi subdomain reverse proxy example

In this example, we will reverse proxy Ombi at the address https://ombi.linuxserver-test.com. First let's make sure that we have a CNAME for ombi set up on our dns provider (a wildcard CNAME * will also cover this) and that it points to our A record, which points to our server IP. If we are using the docker cli method, we also need to create the user defined bridge network (here named lsio) as described above. We also need to make sure that port 443 on our router is forwarded to the correct port on our server.

Here's a docker compose stack we can use to set up both containers:

---
version: "2.1"
services:
  ombi:
    image: lscr.io/linuxserver/ombi
    container_name: ombi
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/London
    volumes:
      - /home/aptalca/appdata/ombi:/config
    ports:
      - 3579:3579
    restart: unless-stopped
  swag:
    image: lscr.io/linuxserver/swag
    container_name: swag
    cap_add:
      - NET_ADMIN
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/London
      - URL=linuxserver-test.com
      - SUBDOMAINS=wildcard
      - VALIDATION=dns
      - DNSPLUGIN=cloudflare
    volumes:
      - /home/aptalca/appdata/swag:/config
    ports:
      - 443:443
      - 80:80
    restart: unless-stopped

And here are the docker cli versions:

Ombi:

docker create \
  --name=ombi \
  --net=lsio \
  -e PUID=1000 \
  -e PGID=1000 \
  -e TZ=Europe/London \
  -p 3579:3579 \
  -v /home/aptalca/appdata/ombi:/config \
  --restart unless-stopped \
  lscr.io/linuxserver/ombi

SWAG:

docker create \
  --name=swag \
  --cap-add=NET_ADMIN \
  --net=lsio \
  -e PUID=1000 \
  -e PGID=1000 \
  -e TZ=Europe/London \
  -e URL=linuxserver-test.com \
  -e SUBDOMAINS=wildcard \
  -e VALIDATION=dns \
  -e DNSPLUGIN=cloudflare \
  -p 443:443 \
  -p 80:80 \
  -v /home/aptalca/appdata/swag:/config \
  --restart unless-stopped \
  lscr.io/linuxserver/swag

Once our containers are up and running (and we confirm we can reach the placeholder page at https://linuxserver-test.com), we simply rename the file ombi.subdomain.conf.sample under /config/nginx/proxy-confs/ to ombi.subdomain.conf and restart the SWAG container. Now when we browse to https://ombi.linuxserver-test.com we should see the Ombi gui.

Nextcloud subdomain reverse proxy example

Nextcloud is a bit trickier because the app has various security measures built in, forcing us to configure certain options manually. As with the other examples, let's make sure that we have a CNAME for nextcloud set up on our dns provider (a wildcard CNAME * will also cover this) and that it points to our A record, which points to our server IP. If we are using the docker cli method, we also need to create the user defined bridge network (here named lsio) as described above. For DuckDNS, we do not need to create CNAMEs, as all sub-subdomains automatically point to the same IP as our custom subdomain, but we need to make sure that it is the correct IP address for our server. We also need to make sure that port 443 on our router is forwarded to the correct port on our server.
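The sample-to-live rename used in the Ombi example above is the same pattern used for all of the preset confs; it can be scripted. A sandboxed sketch (the temp directory and stub file stand in for the real mounted /config/nginx/proxy-confs):

```shell
# Sandboxed demo of enabling a preset proxy conf. On a real install, point
# CONF_DIR at your mounted proxy-confs folder and skip the printf stub.
CONF_DIR="$(mktemp -d)"
APP="ombi"
printf 'server { }\n' > "$CONF_DIR/$APP.subdomain.conf.sample"  # stand-in preset
cp "$CONF_DIR/$APP.subdomain.conf.sample" "$CONF_DIR/$APP.subdomain.conf"
ls "$CONF_DIR"
# on a real setup, follow with: docker restart swag
```

Copying (rather than moving) keeps the .sample file around, which is useful because image updates ship refreshed samples you can diff against.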
In this example we'll use the duckdns wildcard cert, but you can use any Let's Encrypt validation you like, as described above. Here's a docker compose stack to set up our SWAG, nextcloud and mariadb containers:

---
version: "2.1"
services:
  nextcloud:
    image: lscr.io/linuxserver/nextcloud
    container_name: nextcloud
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/London
    volumes:
      - /home/aptalca/appdata/nextcloud/config:/config
      - /home/aptalca/appdata/nextcloud/data:/data
    depends_on:
      - mariadb
    restart: unless-stopped
  mariadb:
    image: lscr.io/linuxserver/mariadb
    container_name: mariadb
    environment:
      - PUID=1000
      - PGID=1000
      - MYSQL_ROOT_PASSWORD=mariadbpassword
      - TZ=Europe/London
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=ncuser
      - MYSQL_PASSWORD=ncpassword
    volumes:
      - /home/aptalca/appdata/mariadb:/config
    restart: unless-stopped
  swag:
    image: lscr.io/linuxserver/swag
    container_name: swag
    cap_add:
      - NET_ADMIN
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/London
      - URL=linuxserver-test.duckdns.org
      - SUBDOMAINS=wildcard
      - VALIDATION=duckdns
      - DUCKDNSTOKEN=97654867496t0877648659765854
    volumes:
      - /home/aptalca/appdata/swag:/config
    ports:
      - 443:443
      - 80:80
    restart: unless-stopped

And here are the docker cli versions:

Nextcloud:

docker create \
  --name=nextcloud \
  --net=lsio \
  -e PUID=1000 \
  -e PGID=1000 \
  -e TZ=Europe/London \
  -v /home/aptalca/appdata/nextcloud/config:/config \
  -v /home/aptalca/appdata/nextcloud/data:/data \
  --restart unless-stopped \
  lscr.io/linuxserver/nextcloud

Mariadb:

docker create \
  --name=mariadb \
  --net=lsio \
  -e PUID=1000 \
  -e PGID=1000 \
  -e MYSQL_ROOT_PASSWORD=mariadbpassword \
  -e TZ=Europe/London \
  -e MYSQL_DATABASE=nextcloud \
  -e MYSQL_USER=ncuser \
  -e MYSQL_PASSWORD=ncpassword \
  -v /home/aptalca/appdata/mariadb:/config \
  --restart unless-stopped \
  lscr.io/linuxserver/mariadb

SWAG:

docker create \
  --name=swag \
  --cap-add=NET_ADMIN \
  --net=lsio \
  -e PUID=1000 \
  -e PGID=1000 \
  -e TZ=Europe/London \
  -e URL=linuxserver-test.duckdns.org \
  -e SUBDOMAINS=wildcard \
  -e VALIDATION=duckdns \
  -e DUCKDNSTOKEN=97654867496t0877648659765854 \
  -p 443:443 \
  -p 80:80 \
  -v /home/aptalca/appdata/swag:/config \
  --restart unless-stopped \
  lscr.io/linuxserver/swag

Now we find the file named nextcloud.subdomain.conf.sample under SWAG's /config/nginx/proxy-confs folder, rename it to nextcloud.subdomain.conf, and restart the SWAG container.

If this is the first time we are accessing Nextcloud (we've never accessed it locally before), we can simply navigate to https://nextcloud.linuxserver-test.duckdns.org and we should see the Nextcloud setup page. We'll fill out the info, use the mariadb user ncuser and the password we selected in the environment variable (ncpassword in the above example), and use mariadb as the Database Host address (container name as dns hostname). We should then be able to go through the intro slides and see the Nextcloud dashboard with our shiny padlock icon next to the address bar.

If this is an existing Nextcloud instance, or we set it up locally via the host IP address and local port, Nextcloud will reject proxied connections. In that case, we have to follow the instructions at the top of the nextcloud.subdomain.conf file:

# assuming this container is called "swag", edit your nextcloud container's config
# located at /config/www/nextcloud/config/config.php and add the following lines before the ");":
#   'trusted_proxies' => ['swag'],
#   'overwrite.cli.url' => 'https://nextcloud.your-domain.com/',
#   'overwritehost' => 'nextcloud.your-domain.com',
#   'overwriteprotocol' => 'https',
#
# Also don't forget to add your domain name to the trusted domains array. It should look somewhat like this:
#   array (
#     0 => '192.168.0.1:444', # This line may look different on your setup, don't modify it.
#     1 => 'nextcloud.your-domain.com',
#   ),

These settings will tell Nextcloud to respond to queries where the destination address is our domain name.
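The config.php edits described above can also be scripted with GNU sed. A sandboxed sketch (the stand-in file mimics the real /config/www/nextcloud/config/config.php; only a subset of the example settings is inserted, and the hostname is the placeholder from the comment block):

```shell
# Sandboxed demo: insert reverse-proxy settings before the closing ");".
# Assumes GNU sed (\n in the replacement); on a real install, operate on
# /config/www/nextcloud/config/config.php in your nextcloud appdata instead.
CFG="$(mktemp)"
printf "<?php\n\$CONFIG = array (\n  0 => '192.168.0.1:444',\n);\n" > "$CFG"
sed -i "s/^);/  'trusted_proxies' => ['swag'],\n);/" "$CFG"
sed -i "s/^);/  'overwritehost' => 'nextcloud.your-domain.com',\n);/" "$CFG"
sed -i "s/^);/  'overwriteprotocol' => 'https',\n);/" "$CFG"
cat "$CFG"
```

Each substitution re-matches the line holding only ");" and pushes a new setting above it, so the settings end up in the order they were inserted.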
If you followed the above directions to set it up for the first time, you only need to add the line 'trusted_proxies' => ['swag'],; otherwise nextcloud 16+ shows a warning about incorrect reverse proxy settings. By default, HSTS is disabled in the SWAG config, because it is a bit of a sledgehammer that prevents the loading of any http assets on the entire domain. You can enable it in SWAG's ssl.conf.

Plex subfolder reverse proxy example

In this example, we will set up Plex as a subfolder so it will be accessible at https://linuxserver-test.com/plex. We will initially set up Plex with host networking and connect to it through its local IP from the same subnet. If we are on a different subnet, or if we are using a bridge network, we can use the PLEX_CLAIM variable to automatically claim the server with our plex account. Once the Plex server is set up, it is safe to switch it from host to bridge networking.

Here's a docker compose stack we can use to set up both containers:

---
version: "2.1"
services:
  plex:
    image: lscr.io/linuxserver/plex
    container_name: plex
    network_mode: host
    environment:
      - PUID=1000
      - PGID=1000
      - VERSION=docker
    volumes:
      - /home/aptalca/appdata/plex:/config
      - /home/aptalca/tvshows:/data/tvshows
      - /home/aptalca/movies:/data/movies
    restart: unless-stopped
  swag:
    image: lscr.io/linuxserver/swag
    container_name: swag
    cap_add:
      - NET_ADMIN
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/London
      - URL=linuxserver-test.com
      - SUBDOMAINS=wildcard
      - VALIDATION=dns
      - DNSPLUGIN=cloudflare
    volumes:
      - /home/aptalca/appdata/swag:/config
    ports:
      - 443:443
      - 80:80
    restart: unless-stopped

Here are the docker cli versions:

Plex:

docker create \
  --name=plex \
  --net=host \
  -e PUID=1000 \
  -e PGID=1000 \
  -e VERSION=docker \
  -v /home/aptalca/appdata/plex:/config \
  -v /home/aptalca/tvshows:/data/tvshows \
  -v /home/aptalca/movies:/data/movies \
  --restart unless-stopped \
  lscr.io/linuxserver/plex

SWAG:

docker create \
  --name=swag \
  --cap-add=NET_ADMIN \
  --net=lsio \
  -e PUID=1000 \
  -e PGID=1000 \
  -e TZ=Europe/London \
  -e URL=linuxserver-test.com \
  -e SUBDOMAINS=wildcard \
  -e VALIDATION=dns \
  -e DNSPLUGIN=cloudflare \
  -p 443:443 \
  -p 80:80 \
  -v /home/aptalca/appdata/swag:/config \
  --restart unless-stopped \
  lscr.io/linuxserver/swag

Once the containers are set up, we browse to http://LOCALSERVERIP:32400/web and set up our Plex server with our Plex account. Then we find the file named plex.subfolder.conf.sample under our SWAG container's /config/nginx/proxy-confs folder and rename it to plex.subfolder.conf.

If we are using bridge networking for our plex container, we can restart the SWAG container and we should be able to access Plex at https://linuxserver-test.com/plex.

If we are using host networking for our plex container, we also have to make one modification to plex.subfolder.conf. We need to find the line that reads proxy_pass http://$upstream_plex:32400; and replace $upstream_plex with our Plex server's local IP address (e.g. proxy_pass http://192.168.1.10:32400;). Then we can restart SWAG and access Plex at https://linuxserver-test.com/plex.

If we want Plex to always use our domain to connect (including in mobile apps), we can add our url https://linuxserver-test.com/plex to the Custom server access URLs in the Plex server settings. After that, it is OK to turn off remote access in the Plex server settings and remove the port forward for port 32400. From then on, all connections to our Plex server will go through the SWAG reverse proxy over port 443.

Using Heimdall as the home page at domain root

In this example, we will set Heimdall as our home page at the domain root, so that when we navigate to https://linuxserver-test.com we reach Heimdall. As before, we need to make sure that port 443 is properly forwarded to our server. If we are using the docker cli method, we also need to create a user defined bridge network as described above.
Here's a docker compose stack we can use to set up both containers:

---
version: "2.1"
services:
  heimdall:
    image: lscr.io/linuxserver/heimdall
    container_name: heimdall
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/London
    volumes:
      - /home/aptalca/appdata/heimdall:/config
    restart: unless-stopped
  swag:
    image: lscr.io/linuxserver/swag
    container_name: swag
    cap_add:
      - NET_ADMIN
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/London
      - URL=linuxserver-test.com
      - SUBDOMAINS=wildcard
      - VALIDATION=dns
      - DNSPLUGIN=cloudflare
    volumes:
      - /home/aptalca/appdata/swag:/config
    ports:
      - 443:443
      - 80:80
    restart: unless-stopped

Here are the docker cli versions:

Heimdall:

docker create \
  --name=heimdall \
  --net=lsio \
  -e PUID=1000 \
  -e PGID=1000 \
  -e TZ=Europe/London \
  -v /home/aptalca/appdata/heimdall:/config \
  --restart unless-stopped \
  lscr.io/linuxserver/heimdall

SWAG:

docker create \
  --name=swag \
  --cap-add=NET_ADMIN \
  --net=lsio \
  -e PUID=1000 \
  -e PGID=1000 \
  -e TZ=Europe/London \
  -e URL=linuxserver-test.com \
  -e SUBDOMAINS=wildcard \
  -e VALIDATION=dns \
  -e DNSPLUGIN=cloudflare \
  -p 443:443 \
  -p 80:80 \
  -v /home/aptalca/appdata/swag:/config \
  --restart unless-stopped \
  lscr.io/linuxserver/swag

Once the containers are set up, we'll find the file named heimdall.subfolder.conf.sample under SWAG's /config/nginx/proxy-confs folder and rename it to heimdall.subfolder.conf. If we look inside that conf file, we'll see that it is set to use location / {, which will cause an issue because a location is already defined for / inside the default site config for SWAG. So we need to edit the default site config at /config/nginx/site-confs/default and comment out the location block for / inside our main server block, so it reads:

#location / {
#    try_files $uri $uri/ /index.html /index.php?$args =404;
#}

That way, nginx will use the / location block from our heimdall proxy conf instead.
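The comment-out step just described can be done with a single GNU sed range expression that prefixes every line of the block with #. A sandboxed sketch (a stand-in file replaces the real /config/nginx/site-confs/default):

```shell
# Sandboxed demo: comment out the root location block so a subfolder conf can
# claim "/". Assumes GNU sed -i; on a real install, edit the default site conf
# in your swag appdata instead of this temp file.
DEFAULT_CONF="$(mktemp)"
cat > "$DEFAULT_CONF" <<'EOF'
    location / {
        try_files $uri $uri/ /index.html /index.php?$args =404;
    }
EOF
# prefix the lines from "location / {" through the closing "}" with #
sed -i '/location \/ {/,/}/ s/^/#/' "$DEFAULT_CONF"
cat "$DEFAULT_CONF"
```

Note this range pattern stops at the first closing brace, so it only works cleanly when the location block contains no nested braces, as is the case here.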
After that, when we navigate to https://linuxserver-test.com, we'll see the Heimdall interface.

If we want to password protect our new home page, we can run the following on the host command line to create a new .htpasswd file: docker exec -it swag htpasswd -c /config/nginx/.htpasswd anyusername. After that, we can activate authentication by editing heimdall.subfolder.conf and uncommenting the relevant lines so they read:

# enable the next two lines for http auth
auth_basic "Restricted";
auth_basic_user_file /config/nginx/.htpasswd;

Troubleshooting

We wrote a blog post for the deprecated letsencrypt image diving into troubleshooting issues regarding dns and port forwards, and it is still a very good resource: blog.linuxserver.io

Common errors

404

This error simply means that the resource was not found. It commonly happens when you try to access a subfolder that is not enabled.

502

This error means that nginx can't talk to the application. There are a few common reasons for this:

- The application and SWAG are not on the same custom docker network. Further up we talk about how to set up Docker networking; however, there are some other common traps:
- The container name does not match the application name. Covered in the section Understanding the proxy conf structure.
- You manually changed the port. Also covered in the section Understanding the proxy conf structure.
- The container originally ran with host networking, or on the default bridge.

In most cases the contents of /config/nginx/resolver.conf should be resolver 127.0.0.11 valid=30s;. If this is not the case, you can:

- Delete it, and restart the container to have it regenerate
- Manually set the content (we won't override it)

Final Thoughts

This image can be used in many different scenarios, as it is a full fledged web server with some bells and whistles added. The above examples should be enough to get you started. For more information, please refer to the official documentation on either Github or Docker Hub.
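The resolver check from the 502 section above is easy to script when chasing that error. A sandboxed sketch (a stand-in file replaces the real /config/nginx/resolver.conf inside your swag appdata):

```shell
# Sandboxed demo of the resolver.conf sanity check from the troubleshooting
# section; on a real install, inspect /config/nginx/resolver.conf instead.
RESOLVER_CONF="$(mktemp)"
echo "resolver 127.0.0.11 valid=30s;" > "$RESOLVER_CONF"   # expected contents
if grep -q "127.0.0.11" "$RESOLVER_CONF"; then
    echo "resolver points at docker dns"
else
    echo "unexpected resolver; delete the file and restart the container to regenerate it"
fi
```

127.0.0.11 is the embedded DNS server docker provides on user defined networks, which is why a container that started life on host networking or the default bridge can end up with the wrong value here.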
If you have questions or issues, or want to discuss and share ideas, feel free to visit our discord: https://discord.gg/YWrKVTn

How to Request Support

As you can see in this article, there are many different configurations, so we need to understand your exact setup before we can provide support. If you encounter a bug and confirm that it's a bug, please report it on our github thread. If you need help with setting it up, join our discord, upload the following info to a service like pastebin, and post the link:

- The docker run/create or compose yml you used
- The full docker log (docker logs swag)
- Any relevant conf files (default, nginx.conf or specific proxy conf)

Table of Contents

Introduction
What are SSL certs?
What is Let's Encrypt (and/or ZeroSSL)?
Creating a SWAG container
Docker cli
Docker compose
Authorization method
Cert provider (Let's Encrypt vs ZeroSSL)
Port forwards
Docker networking
Container setup examples
Create container via http validation
Create container via dns validation with a wildcard cert
Create container via duckdns validation with a wildcard cert
Web hosting examples
Simple html web page hosting
Hosting a Wordpress site
Reverse Proxy
Preset proxy confs
Understanding the proxy conf structure
Subdomain proxy conf
Subfolder proxy conf
Ombi subdomain reverse proxy example
Nextcloud subdomain reverse proxy example
Plex subfolder reverse proxy example
Using Heimdall as the home page at domain root
Troubleshooting
Common errors
404
502
Final Thoughts
How to Request Support

Introduction

What are SSL certs?

SSL certs allow users of a service to communicate via encrypted data transmitted up and down. Third party trusted certs also allow users to make sure that the remote service they are connecting to is really who they say they are, and not someone else in the middle.
When we run a web server for reasons like hosting websites or reverse proxying services on our own domain, we need to set it up with third party trusted ssl certs so client browsers trust it and communicate with it securely. When you connect to a website with a trusted cert, most browsers show a padlock icon next to the address bar to indicate that. Without a trusted cert (i.e. with a self signed cert), most browsers show warning pages or may block access to the website, as the website's identity cannot be confirmed via a trusted third party.

What is Let's Encrypt (and/or ZeroSSL)?

In the past, the common way to get a trusted ssl cert was to contact one of the providers, send them the relevant info to prove ownership of a domain, and pay for the service. Nowadays, with Let's Encrypt and ZeroSSL, one can get free certs via automated means. The SWAG docker image, published and maintained by LinuxServer.io, makes setting up a full-fledged web server with auto generated and renewed ssl certs very easy. It is essentially an nginx webserver with php7, fail2ban (intrusion prevention) and Let's Encrypt cert validation built in. It is just MySQL short of a LEMP stack and is therefore best paired with our MariaDB docker image.

Creating a SWAG container

Most of the initial settings for getting a webserver with ssl certs up are done through the docker run/create or compose yaml parameters. Here's a list of all the settings available, including the optional ones.
It is safe to remove unnecessary parameters for different scenarios.

Docker cli

docker create \
  --name=swag \
  --cap-add=NET_ADMIN \
  -e PUID=1000 \
  -e PGID=1000 \
  -e TZ=Europe/London \
  -e URL=yourdomain.url \
  -e SUBDOMAINS=www, \
  -e VALIDATION=http \
  -e CERTPROVIDER= `#optional` \
  -e DNSPLUGIN=cloudflare `#optional` \
  -e DUCKDNSTOKEN= `#optional` \
  -e EMAIL= `#optional` \
  -e ONLY_SUBDOMAINS=false `#optional` \
  -e EXTRA_DOMAINS= `#optional` \
  -e STAGING=false `#optional` \
  -p 443:443 \
  -p 80:80 `#optional` \
  -v <path to appdata>:/config \
  --restart unless-stopped \
  lscr.io/linuxserver/swag

Docker compose

Compatible with docker-compose v2 schemas.

---
version: "2.1"
services:
  swag:
    image: lscr.io/linuxserver/swag
    container_name: swag
    cap_add:
      - NET_ADMIN
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/London
      - URL=yourdomain.url
      - SUBDOMAINS=www,
      - VALIDATION=http
      - CERTPROVIDER= #optional
      - DNSPLUGIN=cloudflare #optional
      - DUCKDNSTOKEN= #optional
      - EMAIL= #optional
      - ONLY_SUBDOMAINS=false #optional
      - EXTRA_DOMAINS= #optional
      - STAGING=false #optional
    volumes:
      - <path to appdata>:/config
    ports:
      - 443:443
      - 80:80 #optional
    restart: unless-stopped

Authorization method

Our image currently supports three different methods to validate domain ownership:

http:
  Let's Encrypt (acme) server connects to the domain on port 80
  Can be an owned domain or a dynamic dns address
dns:
  Let's Encrypt (acme) server connects to the dns provider
  Api credentials and settings are entered into ini files under /config/dns-conf/
  Supports wildcard certs
  Requires your own domain name (non-free)
duckdns:
  Let's Encrypt (acme) server connects to DuckDNS
  Supports wildcard certs (only for the sub-subdomains)
  No need for your own domain (free)

The validation is performed when the container is started for the first time.
Nginx won't be up until the ssl certs are successfully generated. The certs are valid for 90 days. The container will check the cert expiration status every night, and if the certs are due to expire within 30 days, it will attempt to auto-renew. If your certs are about to expire in less than 30 days, check the logs under /config/log/letsencrypt to see why the auto-renewals failed.

Cert Provider (Let's Encrypt vs ZeroSSL)

As of January 2021, SWAG supports getting certs validated by either Let's Encrypt or ZeroSSL. Both services use the ACME protocol as the underlying method to validate ownership. Our Certbot client in the SWAG image is ACME compliant and therefore supports both services. Although the two services are very similar, ZeroSSL does (at the time of writing) have a couple of advantages over Let's Encrypt:

- ZeroSSL provides unlimited certs via ACME and has no rate limits or throttling (it's quite common for new users to get throttled by Let's Encrypt due to multiple unsuccessful attempts to validate)
- ZeroSSL provides a web interface that allows users to list and manage the certs they have received

SWAG currently defaults to Let's Encrypt as the cert provider so as not to break existing installs, but users can override that behavior by setting the environment variable CERTPROVIDER=zerossl to retrieve a cert from ZeroSSL instead. The only gotcha is that ZeroSSL requires the EMAIL env var to be set so the certs can be tied to a ZeroSSL account for management over their web interface.

Port forwards

Port 443 mapping is required for access through https://domain.com. However, you don't necessarily need to have it listen on port 443 on the host server. All that is needed is for port 443 on the router (wan) to somehow forward to port 443 inside the container, even if it goes through a different port on the host.
For instance, it is OK to have port 443 on the router (wan) forward to port 444 on the host, and then map port 444 to port 443 in the docker run/create or compose yml. Port 80 forwarding is required for http validation only. The same rule as above applies; it's OK to go from 80 on the router to 81 on the host, mapped to 80 in the container.

Docker networking

The SWAG container happily runs with bridge networking. However, the default bridge network in docker does not allow containers to connect to each other via container names used as dns hostnames. Therefore, it is recommended to first create a user defined bridge network and attach the containers to that network. If you are using docker-compose and your services are in the same yaml, you do not need to do this, because docker-compose automatically creates a user defined bridge network and attaches each container to it, as long as no other networking option is defined in their config.

For the below examples, we will use a network named lsio. We can create it via docker network create lsio. After that, any container that is created with --net=lsio can ping the others by container name as dns hostname. Keep in mind that dns hostnames are meant to be case-insensitive, whereas container names are case-sensitive. For container names to be used as dns hostnames in nginx, they should be all lowercase, as nginx converts them to all lowercase before trying to resolve.

Container setup examples

Create container via http validation

Let's assume our domain name is linuxserver-test.com and we would like our cert to also cover www.linuxserver-test.com and ombi.linuxserver-test.com. On the router, forward ports 80 and 443 to your host server.
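If 443 or 80 are already taken on the host, the indirection described in the Port forwards section above can be expressed directly in the compose ports mapping; a hypothetical remap (router forwards wan 443 to host 444 and wan 80 to host 81; the container side stays fixed):

```yaml
    ports:
      - 444:443   # router wan 443 -> host 444 -> container 443
      - 81:80     # router wan 80  -> host 81  -> container 80
```

The equivalent docker cli flags would be -p 444:443 and -p 81:80.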
On your dns provider (if using your own domain), create an A record for the main domain and point it to your server IP (wan). Also create CNAMEs for www and ombi and point them to the A record for the domain.

With docker cli, we'll first create a user defined bridge network if we haven't already (docker network create lsio), and then create the container:

docker create \
  --name=swag \
  --cap-add=NET_ADMIN \
  --net=lsio \
  -e PUID=1000 \
  -e PGID=1000 \
  -e TZ=Europe/London \
  -e URL=linuxserver-test.com \
  -e SUBDOMAINS=www,ombi \
  -e VALIDATION=http \
  -p 443:443 \
  -p 80:80 \
  -v /home/aptalca/appdata/swag:/config \
  --restart unless-stopped \
  lscr.io/linuxserver/swag

Once created, we do docker start swag to start it.

With docker compose, we can use the following yml:

---
version: "2.1"
services:
  swag:
    image: lscr.io/linuxserver/swag
    container_name: swag
    cap_add:
      - NET_ADMIN
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/London
      - URL=linuxserver-test.com
      - SUBDOMAINS=www,ombi
      - VALIDATION=http
    volumes:
      - /home/aptalca/appdata/swag:/config
    ports:
      - 443:443
      - 80:80
    restart: unless-stopped

We can fire up the container with docker-compose up -d.

After the container is started, we'll watch the logs with docker logs swag -f. After some initialization, we will see the validation steps. Once all the steps are complete, it should print Server ready in the logs. Now we can browse to https://www.linuxserver-test.com and we'll see the default landing page displayed.

Create container via dns validation with a wildcard cert

Let's assume our domain name is linuxserver-test.com and we would like our cert to also cover www.linuxserver-test.com, ombi.linuxserver-test.com and any other subdomain possible. On the router, we'll forward port 443 to our host server (port 80 forwarding is optional). We'll need to make sure that we are using a dns provider that is supported by this image.
Currently the following dns plugins are supported: cloudflare, cloudxns, digitalocean, dnsimple, dnsmadeeasy, google, luadns, nsone, ovh, rfc2136 and route53. Your dns provider is by default the provider of your domain name, and if they are not supported, it is very easy to switch to a different dns provider. Cloudflare is recommended due to being free and reliable. To switch to Cloudflare, you can register for a free account and follow their steps to point the nameservers to Cloudflare. The rest of the instructions assume that we are using the cloudflare dns plugin.

On our dns provider, we'll create an A record for the main domain and point it to our server IP (wan). We'll also create a CNAME for * and point it to the A record for the domain. On Cloudflare, we'll click on the orange cloud to turn it grey so that it is dns only and not cached/proxied by Cloudflare, which would add more complexities.

Now, let's get the container set up. With docker cli, we'll first create a user defined bridge network if we haven't already (docker network create lsio), and then create the container:

docker create \
  --name=swag \
  --cap-add=NET_ADMIN \
  --net=lsio \
  -e PUID=1000 \
  -e PGID=1000 \
  -e TZ=Europe/London \
  -e URL=linuxserver-test.com \
  -e SUBDOMAINS=wildcard \
  -e VALIDATION=dns \
  -e DNSPLUGIN=cloudflare \
  -p 443:443 \
  -p 80:80 \
  -v /home/aptalca/appdata/swag:/config \
  --restart unless-stopped \
  lscr.io/linuxserver/swag

And we start the container via docker start swag.

With docker compose, we'll use:

---
version: "2.1"
services:
  swag:
    image: lscr.io/linuxserver/swag
    container_name: swag
    cap_add:
      - NET_ADMIN
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/London
      - URL=linuxserver-test.com
      - SUBDOMAINS=wildcard
      - VALIDATION=dns
      - DNSPLUGIN=cloudflare
    volumes:
      - /home/aptalca/appdata/swag:/config
    ports:
      - 443:443
      - 80:80
    restart: unless-stopped

Then we'll fire up the container via docker-compose up -d.

After the container is started, we'll watch the logs with docker logs swag -f. After some init steps, we'll notice that the container gives an error during validation due to wrong credentials. That's because we haven't yet entered the correct credentials for the Cloudflare api. We can browse to the location /config/dns-conf, which is mapped from the host location (according to the above settings) /home/aptalca/appdata/swag/dns-conf/, and edit the correct ini file for our dns provider. For Cloudflare, we'll enter our e-mail address and the api key. The api key can be retrieved by going to the Overview page and clicking on the Get your API key link. We'll need the Global API Key.

Once we enter the credentials into the ini file, we'll restart the docker container via docker restart swag and again watch the logs. After successful validation, we should see the notice Server ready, and our webserver should be up and accessible at https://www.linuxserver-test.com.

Create container via duckdns validation with a wildcard cert

We will first need to get a subdomain from DuckDNS. Let's assume we get linuxserver-test, so our url will be linuxserver-test.duckdns.org. Then we'll need to make sure that the subdomain points to our server IP (wan) on the DuckDNS website. We can always use our DuckDNS docker image to keep the IP up to date. Don't forget to get the token for your account from DuckDNS. On the router, we'll forward port 443 to our host server (port 80 forwarding is optional). Now, let's get the container set up.
With docker cli, we'll first create a user defined bridge network if we haven't already (docker network create lsio), and then create the container:

docker create \
  --name=swag \
  --cap-add=NET_ADMIN \
  --net=lsio \
  -e PUID=1000 \
  -e PGID=1000 \
  -e TZ=Europe/London \
  -e URL=linuxserver-test.duckdns.org \
  -e SUBDOMAINS=wildcard \
  -e VALIDATION=duckdns \
  -e DUCKDNSTOKEN=97654867496t0877648659765854 \
  -p 443:443 \
  -p 80:80 \
  -v /home/aptalca/appdata/swag:/config \
  --restart unless-stopped \
  lscr.io/linuxserver/swag

And we start the container via docker start swag.

With docker compose, we'll use:

---
version: "2.1"
services:
  swag:
    image: lscr.io/linuxserver/swag
    container_name: swag
    cap_add:
      - NET_ADMIN
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/London
      - URL=linuxserver-test.duckdns.org
      - SUBDOMAINS=wildcard
      - VALIDATION=duckdns
      - DUCKDNSTOKEN=97654867496t0877648659765854
    volumes:
      - /home/aptalca/appdata/swag:/config
    ports:
      - 443:443
      - 80:80
    restart: unless-stopped

Then we'll fire up the container via docker-compose up -d.

After the container is started, we'll watch the logs with docker logs swag -f. We'll see some initialization and then the validation steps. Once all the steps are complete, it should print Server ready in the logs. Now we can access the webserver by browsing to https://www.linuxserver-test.duckdns.org.

NOTICE: Due to a DuckDNS limitation, our cert only covers the wildcard subdomains, not the main url. So if we try to access https://linuxserver-test.duckdns.org, we'll see a browser warning about an invalid ssl cert.
But accessing it through the www (or ombi or any other) subdomain should work fine.

Web hosting examples

Simple html web page hosting

Once we have a working container, we can drop our web documents in and modify the nginx config files to set up our webserver. All the necessary files are under /config, which is mapped from the host location (set by the above examples) /home/aptalca/appdata/swag. We can drop all of our web/html files into /config/www.

The main site config nginx uses can be found at /config/nginx/site-confs/default. Don't delete this file, as it will be regenerated on container restart, but feel free to modify it as needed. By default, it is listening on port 443, and the root folder is set to /config/www, so if you drop a page1.html into that location, it will be accessible at https://linuxserver-test.com/page1.html.

To enable listening on port 80 and automatically redirecting to port 443 to enforce ssl, uncomment the lines at the top of the default site config so it reads:

# redirect all traffic to https
server {
    listen 80;
    listen [::]:80;
    server_name _;
    return 301 https://$host$request_uri;
}

After any changes to the config files, simply restart the container via docker restart swag to reload the nginx config.

Hosting a Wordpress site

Wordpress requires a mysql database. For that, we'll use the linuxserver MariaDB docker image. Here's a docker compose stack to get both containers set up.
For this exercise, we'll utilize the cloudflare dns plugin for Let's Encrypt validation, but you can use any other method to set it up as described in this linked section:

```yaml
---
version: "2.1"
services:
  mariadb:
    image: lscr.io/linuxserver/mariadb
    container_name: mariadb
    environment:
      - PUID=1000
      - PGID=1000
      - MYSQL_ROOT_PASSWORD=mariadbpassword
      - TZ=Europe/London
      - MYSQL_DATABASE=WP_database
      - MYSQL_USER=WP_dbuser
      - MYSQL_PASSWORD=WP_dbpassword
    volumes:
      - /home/aptalca/appdata/mariadb:/config
    restart: unless-stopped
  swag:
    image: lscr.io/linuxserver/swag
    container_name: swag
    cap_add:
      - NET_ADMIN
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/London
      - URL=linuxserver-test.com
      - SUBDOMAINS=wildcard
      - VALIDATION=dns
      - DNSPLUGIN=cloudflare
    volumes:
      - /home/aptalca/appdata/swag:/config
    ports:
      - 443:443
      - 80:80
    depends_on:
      - mariadb
    restart: unless-stopped
```

And here are the docker cli versions (make sure you already created the lsio network as described above):

Mariadb:

```shell
docker create \
  --name=mariadb \
  --net=lsio \
  -e PUID=1000 \
  -e PGID=1000 \
  -e MYSQL_ROOT_PASSWORD=mariadbpassword \
  -e TZ=Europe/London \
  -e MYSQL_DATABASE=WP_database \
  -e MYSQL_USER=WP_dbuser \
  -e MYSQL_PASSWORD=WP_dbpassword \
  -v /home/aptalca/appdata/mariadb:/config \
  --restart unless-stopped \
  lscr.io/linuxserver/mariadb
```

SWAG:

```shell
docker create \
  --name=swag \
  --cap-add=NET_ADMIN \
  --net=lsio \
  -e PUID=1000 \
  -e PGID=1000 \
  -e TZ=Europe/London \
  -e URL=linuxserver-test.com \
  -e SUBDOMAINS=wildcard \
  -e VALIDATION=dns \
  -e DNSPLUGIN=cloudflare \
  -p 443:443 \
  -p 80:80 \
  -v /home/aptalca/appdata/swag:/config \
  --restart unless-stopped \
  lscr.io/linuxserver/swag
```

Once the SWAG container is set up with ssl certs and the webserver is up, we'll download the latest WordPress and untar it into our www folder:

```shell
wget https://wordpress.org/latest.tar.gz
tar xvf latest.tar.gz -C /home/aptalca/appdata/swag/www/
rm latest.tar.gz
```

Now that we have all the wordpress files under the container's
/config/www/wordpress folder, we'll change the root directive in our SWAG default site conf to point there. We'll find the line in /config/nginx/site-confs/default that reads root /config/www; and change it to root /config/www/wordpress; and restart SWAG. Now we should be able to access our wordpress config page at https://linuxserver-test.com/wp-admin/install.php. We'll go ahead and enter mariadb as the Database Host address (we are using the container name as the dns hostname since both containers are in the same user defined bridge network), and also enter the Database Name, user and password we used in the mariadb config above (WP_database, WP_dbuser and WP_dbpassword). Once we go through the rest of the install steps, our wordpress instance should be fully set up and available at https://linuxserver-test.com. If you would like to have http requests on port 80 enabled and auto redirected to https on port 443, uncomment the relevant lines at the top of the default site config to read:

```nginx
# redirect all traffic to https
server {
    listen 80;
    listen [::]:80;
    server_name _;
    return 301 https://$host$request_uri;
}
```

Reverse Proxy

A reverse proxy is a type of proxy server that retrieves resources on behalf of a client from one or more servers. These resources are then returned to the client as if they originated from the web server itself (shamelessly borrowed from another post on our blog). In this case, a user or a client browser can connect to our SWAG container via https on port 443 and request a service such as Ombi; our SWAG container then connects to the ombi container, retrieves the data and passes it on to the client via https with our trusted cert.
The connection to ombi is local and does not need to be encrypted, but all communication between our SWAG container and the client browser will be encrypted.

Preset proxy confs

Our SWAG image comes with a list of preset reverse proxy confs for popular apps and services. They are hosted on Github and are pulled into the /config/nginx/proxy-confs folder as inactive sample files. To activate, one must rename a conf file to remove .sample from the filename and restart the SWAG container. Any proxy conf file in that folder with a name that matches *.subdomain.conf or *.subfolder.conf will be loaded in nginx during container start. Most proxy confs work without any modification, but some may require other changes. All the required changes are listed at the top of each proxy conf. The conf files use container names to reach other containers, so the proxied containers should be named the same as listed in our documentation for each container. The conf files also require that the SWAG container is in the same user defined bridge network as the other container, so they can reach each other via container names as dns hostnames. Make sure you follow the instructions listed above in the Docker networking section.

Understanding the proxy conf structure

Subdomain proxy conf

Here's the preset proxy conf for Heimdall as a subdomain (i.e.
https://heimdall.linuxserver-test.com):

```nginx
# make sure that your dns has a cname set for heimdall
server {
    listen 443 ssl;
    listen [::]:443 ssl;

    server_name heimdall.*;

    include /config/nginx/ssl.conf;

    client_max_body_size 0;

    # enable for ldap auth, fill in ldap details in ldap.conf
    #include /config/nginx/ldap.conf;

    # enable for Authelia
    #include /config/nginx/authelia-server.conf;

    location / {
        # enable the next two lines for http auth
        #auth_basic "Restricted";
        #auth_basic_user_file /config/nginx/.htpasswd;

        # enable the next two lines for ldap auth
        #auth_request /auth;
        #error_page 401 =200 /ldaplogin;

        # enable for Authelia
        #include /config/nginx/authelia-location.conf;

        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_app heimdall;
        set $upstream_port 443;
        set $upstream_proto https;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;
    }
}
```

Let's dissect this conf to look at what each directive or block does. server { } This is our server block. Whenever nginx gets a request from a client, it determines which server block should be processed based on the destination server name, port and other relevant info, and the matching server block determines how nginx handles and responds to the request. listen 443 ssl; listen [::]:443 ssl; This means that only requests coming to port 443 will match this server block. server_name heimdall.*; Only destination addresses that match heimdall.* will match this server block. include /config/nginx/ssl.conf; This directive injects the contents of our ssl.conf file here, which contains all ssl related settings (cert location, ciphers used, etc.). client_max_body_size 0; Removes the size limitation on uploads (default 1MB). # enable for ldap auth, fill in ldap details in ldap.conf #include /config/nginx/ldap.conf; Commented out (disabled) by default. When enabled, it will inject the contents of ldap.conf, the necessary settings for LDAP auth.
# enable for Authelia #include /config/nginx/authelia-server.conf; Commented out (disabled) by default. When enabled, it will inject the contents of authelia-server.conf, the necessary settings for Authelia integration. location / { } Location blocks are used for subfolders or paths. After a server block is matched, nginx will look at the subfolder or path requested to match one of the location blocks inside the selected server block. This particular block in our example is for / so it will match any subfolder or path at this address. # enable the next two lines for http auth #auth_basic "Restricted"; #auth_basic_user_file /config/nginx/.htpasswd; Commented out (disabled) by default. When enabled, it will use .htpasswd to perform user/pass authentication before allowing access. # enable the next two lines for ldap auth #auth_request /auth; #error_page 401 =200 /ldaplogin; Commented out (disabled) by default. When enabled, it will use LDAP authentication before allowing access. # enable for Authelia #include /config/nginx/authelia-location.conf; Commented out (disabled) by default. When enabled, it will use Authelia authentication before allowing access. include /config/nginx/proxy.conf; Injects the contents of proxy.conf, which contains various directives and headers that are common for proxied connections. resolver 127.0.0.11 valid=30s; Tells nginx to use the docker dns to resolve the IP address when the container name is used as the address in the next line. set $upstream_app heimdall; set $upstream_port 443; set $upstream_proto https; proxy_pass $upstream_proto://$upstream_app:$upstream_port; This is a bit of a tricky part. Normally, we could just put in the directive proxy_pass https://heimdall:443; and expect nginx to connect to Heimdall via its container name used as a dns hostname. Although that works for the most part, nginx has an annoying habit.
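The issue is nginx's startup-time resolution of proxy_pass hostnames; the two forms compare as follows (a minimal sketch, with heimdall standing in for any proxied container name):

```nginx
# Hard-coded hostname: nginx resolves "heimdall" once at startup and
# refuses to start if the name is unresolvable at that moment.
location / {
    proxy_pass https://heimdall:443;
}

# Variable form: resolution is deferred to request time via the docker
# dns resolver, so nginx starts even while heimdall is down.
location / {
    resolver 127.0.0.11 valid=30s;
    set $upstream_app heimdall;
    set $upstream_port 443;
    set $upstream_proto https;
    proxy_pass $upstream_proto://$upstream_app:$upstream_port;
}
```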
During start, nginx checks all dns hostnames used in proxy_pass statements, and if any one of them is not accessible, it refuses to start. We really don't want a stopped proxied container to prevent our webserver from starting up, so we use a trick. If the proxy_pass statement contains a variable instead of a dns hostname, nginx doesn't check whether it's accessible during start. So here we are setting 3 variables: one named $upstream_app with the value heimdall, one named $upstream_port with the value of the internal heimdall port 443, and one named $upstream_proto with the value https. We then use these variables as the address in the proxy_pass directive. That way, if the heimdall container is down for any reason, nginx can still start. When using a variable instead of a hostname, we also have to set the resolver to docker dns in the previous line. If the proxied container is not in the same user defined bridge network as SWAG (it could be on a remote host, using host networking or macvlan), we can change the value of $upstream_app to an IP address instead: set $upstream_app 192.168.1.10;

Subfolder proxy conf

Here's the preset proxy conf for mytinytodo via a subfolder:

```nginx
# works with https://github.com/breakall/mytinytodo-docker
# set the mtt_url to 'https://your.domain.com/todo/' in db/config.php

location /todo {
    return 301 $scheme://$host/todo/;
}

location ^~ /todo/ {
    # enable the next two lines for http auth
    #auth_basic "Restricted";
    #auth_basic_user_file /config/nginx/.htpasswd;

    # enable the next two lines for ldap auth, also customize and enable ldap.conf in the default conf
    #auth_request /auth;
    #error_page 401 =200 /ldaplogin;

    # enable for Authelia, also enable authelia-server.conf in the default site config
    #include /config/nginx/authelia-location.conf;

    include /config/nginx/proxy.conf;
    resolver 127.0.0.11 valid=30s;
    set $upstream_app mytinytodo;
    set $upstream_port 80;
    set $upstream_proto http;
    proxy_pass $upstream_proto://$upstream_app:$upstream_port/;
}
```

Unlike the subdomain proxy confs, here we do not have a server block. That is because all of the subfolder proxy confs get injected into the main server block of our root domain defined in the default site conf. So here we are only defining the location blocks for our specific subfolders. Many of the elements are the same as the subdomain ones, so for those you can refer to the previous section. Let's take a look at some of the differences. # works with https://github.com/breakall/mytinytodo-docker # set the mtt_url to 'https://your.domain.com/todo/' in db/config.php These are the instructions to get the mytinytodo container ready to work with our reverse proxy. location /todo { return 301 $scheme://$host/todo/; } Redirects requests for https://linuxserver-test.com/todo to https://linuxserver-test.com/todo/ (added forward slash at the end). location ^~ /todo/ { } Any requests sent to nginx where the destination starts with https://linuxserver-test.com/todo/ will match this location block. set $upstream_app mytinytodo; set $upstream_port 80; set $upstream_proto http; proxy_pass $upstream_proto://$upstream_app:$upstream_port/; Same as the previous example, we set a variable $upstream_app with the value mytinytodo and tell nginx to use the variable as the address. Keep in mind that the port listed here is the container port, because nginx is connecting to this container directly via the docker network. So if our mytinytodo container has a port mapping of -p 8080:80, we still set the $upstream_port variable to 80. Nginx displays an interesting behavior here. Even though we define http://$upstream_app:$upstream_port/ (in other words, http://mytinytodo:80/) as the address nginx should proxy, nginx actually connects to http://mytinytodo:80/todo.
Whenever we use a variable as part of the proxy_pass url, nginx automatically appends the defined location (in this case /todo) to the end of the proxy_pass url before it connects. If we include the subfolder ourselves, nginx will try to connect to http://mytinytodo:80/todo/todo and will fail.

Ombi subdomain reverse proxy example

In this example, we will reverse proxy Ombi at the address https://ombi.linuxserver-test.com. First let's make sure that we have a CNAME for ombi set up on our dns provider (a wildcard CNAME * will also cover this) and that it is pointing to our A record, which points to our server IP. If we are using the docker cli method, we also need to create the user defined bridge network (here named lsio) as described above. We also need to make sure that port 443 on our router is forwarded to the correct port on our server. Here's a docker compose stack we can use to set up both containers:

```yaml
---
version: "2.1"
services:
  ombi:
    image: lscr.io/linuxserver/ombi
    container_name: ombi
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/London
    volumes:
      - /home/aptalca/appdata/ombi:/config
    ports:
      - 3579:3579
    restart: unless-stopped
  swag:
    image: lscr.io/linuxserver/swag
    container_name: swag
    cap_add:
      - NET_ADMIN
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/London
      - URL=linuxserver-test.com
      - SUBDOMAINS=wildcard
      - VALIDATION=dns
      - DNSPLUGIN=cloudflare
    volumes:
      - /home/aptalca/appdata/swag:/config
    ports:
      - 443:443
      - 80:80
    restart: unless-stopped
```

And here are the docker cli versions:

Ombi:

```shell
docker create \
  --name=ombi \
  --net=lsio \
  -e PUID=1000 \
  -e PGID=1000 \
  -e TZ=Europe/London \
  -p 3579:3579 \
  -v /home/aptalca/appdata/ombi:/config \
  --restart unless-stopped \
  lscr.io/linuxserver/ombi
```

SWAG:

```shell
docker create \
  --name=swag \
  --cap-add=NET_ADMIN \
  --net=lsio \
  -e PUID=1000 \
  -e PGID=1000 \
  -e TZ=Europe/London \
  -e URL=linuxserver-test.com \
  -e SUBDOMAINS=wildcard \
  -e VALIDATION=dns \
  -e DNSPLUGIN=cloudflare \
  -p 443:443 \
  -p 80:80 \
  -v /home/aptalca/appdata/swag:/config \
  --restart unless-stopped \
  lscr.io/linuxserver/swag
```

Once our containers are up and running (and we confirm we can reach the placeholder page at https://linuxserver-test.com), we simply rename the file ombi.subdomain.conf.sample under /config/nginx/proxy-confs/ to ombi.subdomain.conf and restart the SWAG container. Now when we browse to https://ombi.linuxserver-test.com we should see the Ombi gui.

Nextcloud subdomain reverse proxy example

Nextcloud is a bit trickier because the app has various security measures built in, forcing us to configure certain options manually. As with the other examples, let's make sure that we have a CNAME for nextcloud set up on our dns provider (a wildcard CNAME * will also cover this) and that it is pointing to our A record, which points to our server IP. If we are using the docker cli method, we also need to create the user defined bridge network (here named lsio) as described above. For DuckDNS, we do not need to create CNAMES, as all sub-subdomains automatically point to the same IP as our custom subdomain, but we need to make sure that it is the correct IP address for our server. We also need to make sure that port 443 on our router is forwarded to the correct port on our server.
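The rename-and-restart activation step used throughout these examples can be scripted. This is a sketch, not an official tool: it operates on a scratch directory standing in for /config/nginx/proxy-confs, and the docker restart step is shown commented out since it only applies on a live host:

```shell
# Scratch directory standing in for /config/nginx/proxy-confs
confs=$(mktemp -d)
touch "$confs/ombi.subdomain.conf.sample" "$confs/plex.subfolder.conf.sample"

# Activate a preset conf by stripping the .sample suffix
app=ombi.subdomain
cp "$confs/$app.conf.sample" "$confs/$app.conf"

# Nginx loads any file matching *.subdomain.conf or *.subfolder.conf
ls "$confs" | grep -E '\.(subdomain|subfolder)\.conf$'

# On a real host, reload SWAG afterwards:
# docker restart swag
```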
In this example we'll use the duckdns wildcard cert, but you can use any Let's Encrypt validation you like as described above. Here's a docker compose stack to set up our SWAG, nextcloud and mariadb containers:

```yaml
---
version: "2.1"
services:
  nextcloud:
    image: lscr.io/linuxserver/nextcloud
    container_name: nextcloud
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/London
    volumes:
      - /home/aptalca/appdata/nextcloud/config:/config
      - /home/aptalca/appdata/nextcloud/data:/data
    depends_on:
      - mariadb
    restart: unless-stopped
  mariadb:
    image: lscr.io/linuxserver/mariadb
    container_name: mariadb
    environment:
      - PUID=1000
      - PGID=1000
      - MYSQL_ROOT_PASSWORD=mariadbpassword
      - TZ=Europe/London
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=ncuser
      - MYSQL_PASSWORD=ncpassword
    volumes:
      - /home/aptalca/appdata/mariadb:/config
    restart: unless-stopped
  swag:
    image: lscr.io/linuxserver/swag
    container_name: swag
    cap_add:
      - NET_ADMIN
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/London
      - URL=linuxserver-test.duckdns.org
      - SUBDOMAINS=wildcard
      - VALIDATION=duckdns
      - DUCKDNSTOKEN=97654867496t0877648659765854
    volumes:
      - /home/aptalca/appdata/swag:/config
    ports:
      - 443:443
      - 80:80
    restart: unless-stopped
```

And here are the docker cli versions:

Nextcloud:

```shell
docker create \
  --name=nextcloud \
  --net=lsio \
  -e PUID=1000 \
  -e PGID=1000 \
  -e TZ=Europe/London \
  -v /home/aptalca/appdata/nextcloud/config:/config \
  -v /home/aptalca/appdata/nextcloud/data:/data \
  --restart unless-stopped \
  lscr.io/linuxserver/nextcloud
```

Mariadb:

```shell
docker create \
  --name=mariadb \
  --net=lsio \
  -e PUID=1000 \
  -e PGID=1000 \
  -e MYSQL_ROOT_PASSWORD=mariadbpassword \
  -e TZ=Europe/London \
  -e MYSQL_DATABASE=nextcloud \
  -e MYSQL_USER=ncuser \
  -e MYSQL_PASSWORD=ncpassword \
  -v /home/aptalca/appdata/mariadb:/config \
  --restart unless-stopped \
  lscr.io/linuxserver/mariadb
```

SWAG:

```shell
docker create \
  --name=swag \
  --cap-add=NET_ADMIN \
  --net=lsio \
  -e PUID=1000 \
  -e PGID=1000 \
  -e TZ=Europe/London \
  -e URL=linuxserver-test.duckdns.org \
  -e SUBDOMAINS=wildcard \
  -e VALIDATION=duckdns \
  -e DUCKDNSTOKEN=97654867496t0877648659765854 \
  -p 443:443 \
  -p 80:80 \
  -v /home/aptalca/appdata/swag:/config \
  --restart unless-stopped \
  lscr.io/linuxserver/swag
```

Now we find the file named nextcloud.subdomain.conf.sample under SWAG's /config/nginx/proxy-confs folder and rename it to nextcloud.subdomain.conf, then restart the SWAG container. If this is the first time we are accessing Nextcloud (we've never accessed it locally before), we can simply navigate to https://nextcloud.linuxserver-test.duckdns.org and we should see the Nextcloud setup page. We'll fill out the info, use the mariadb user ncuser and the password we selected in the environment variable (ncpassword in the above example), and we'll use mariadb as the Database Host address (container name as dns hostname). We should then be able to go through the intro slides and then see the Nextcloud dashboard with our shiny padlock icon next to the address bar. If this is an existing Nextcloud instance, or we set it up locally via the host IP address and local port, Nextcloud will reject proxied connections. In that case, we have to follow the instructions at the top of the nextcloud.subdomain.conf file:

```nginx
# assuming this container is called "swag", edit your nextcloud container's config
# located at /config/www/nextcloud/config/config.php and add the following lines before the ");":
#   'trusted_proxies' => ['swag'],
#   'overwrite.cli.url' => 'https://nextcloud.your-domain.com/',
#   'overwritehost' => 'nextcloud.your-domain.com',
#   'overwriteprotocol' => 'https',
#
# Also don't forget to add your domain name to the trusted domains array. It should look somewhat like this:
#   array (
#     0 => '192.168.0.1:444', # This line may look different on your setup, don't modify it.
#     1 => 'nextcloud.your-domain.com',
#   ),
```

These settings will tell Nextcloud to respond to queries where the destination address is our domain name.
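Applied to config.php, those commented instructions look roughly like this. This is a sketch, not a complete config file: nextcloud.your-domain.com and the pre-existing trusted_domains entry are placeholders to adapt, and any settings already present in your file must be kept:

```php
<?php
// Sketch of the relevant parts of /config/www/nextcloud/config/config.php
$CONFIG = array (
  // ... existing settings stay as they are ...
  'trusted_domains' =>
  array (
    0 => '192.168.0.1:444', // whatever was already there; don't modify it
    1 => 'nextcloud.your-domain.com',
  ),
  'trusted_proxies' => ['swag'],
  'overwrite.cli.url' => 'https://nextcloud.your-domain.com/',
  'overwritehost' => 'nextcloud.your-domain.com',
  'overwriteprotocol' => 'https',
);
```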
If you followed the above directions to set it up for the first time, you only need to add the line 'trusted_proxies' => ['swag'], otherwise nextcloud 16+ shows a warning about incorrect reverse proxy settings. By default, HSTS is disabled in the SWAG config, because it is a bit of a sledgehammer that prevents loading of any http assets on the entire domain. You can enable it in SWAG's ssl.conf.

Plex subfolder reverse proxy example

In this example, we will set up Plex as a subfolder so it will be accessible at https://linuxserver-test.com/plex. We will initially set up Plex with host networking through its local IP and will connect to it from the same subnet. If we are on a different subnet, or if using a bridge network, we can use the PLEX_CLAIM variable to automatically claim the server with our plex account. Once the Plex server is set up, it is safe to switch it from host to bridge networking. Here's a docker compose stack we can use to set up both containers:

```yaml
---
version: "2.1"
services:
  plex:
    image: lscr.io/linuxserver/plex
    container_name: plex
    network_mode: host
    environment:
      - PUID=1000
      - PGID=1000
      - VERSION=docker
    volumes:
      - /home/aptalca/appdata/plex:/config
      - /home/aptalca/tvshows:/data/tvshows
      - /home/aptalca/movies:/data/movies
    restart: unless-stopped
  swag:
    image: lscr.io/linuxserver/swag
    container_name: swag
    cap_add:
      - NET_ADMIN
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/London
      - URL=linuxserver-test.com
      - SUBDOMAINS=wildcard
      - VALIDATION=dns
      - DNSPLUGIN=cloudflare
    volumes:
      - /home/aptalca/appdata/swag:/config
    ports:
      - 443:443
      - 80:80
    restart: unless-stopped
```

Here are the docker cli versions:

Plex:

```shell
docker create \
  --name=plex \
  --net=host \
  -e PUID=1000 \
  -e PGID=1000 \
  -e VERSION=docker \
  -v /home/aptalca/appdata/plex:/config \
  -v /home/aptalca/tvshows:/data/tvshows \
  -v /home/aptalca/movies:/data/movies \
  --restart unless-stopped \
  lscr.io/linuxserver/plex
```

SWAG:

```shell
docker create \
  --name=swag \
  --cap-add=NET_ADMIN \
  --net=lsio \
  -e PUID=1000 \
  -e PGID=1000 \
  -e TZ=Europe/London \
  -e URL=linuxserver-test.com \
  -e SUBDOMAINS=wildcard \
  -e VALIDATION=dns \
  -e DNSPLUGIN=cloudflare \
  -p 443:443 \
  -p 80:80 \
  -v /home/aptalca/appdata/swag:/config \
  --restart unless-stopped \
  lscr.io/linuxserver/swag
```

Once the containers are set up, we browse to http://LOCALSERVERIP:32400/web and set up our Plex server with our Plex account. Then we can find the file named plex.subfolder.conf.sample under our SWAG container's /config/nginx/proxy-confs folder and rename it to plex.subfolder.conf. If we are using bridge networking for our plex container, we can restart the SWAG container and we should be able to access Plex at https://linuxserver-test.com/plex. If we are using host networking for our plex container, we will also have to make one modification to plex.subfolder.conf. We need to find the line that reads proxy_pass http://$upstream_plex:32400; and replace $upstream_plex with our Plex server's local IP address (i.e. proxy_pass http://192.168.1.10:32400;). Then we can restart SWAG and access Plex at https://linuxserver-test.com/plex. If we want Plex to always use our domain to connect (including in mobile apps), we can add our url https://linuxserver-test.com/plex to the Custom server access URLs in the Plex server settings. After that, it is OK to turn off remote access in the Plex server settings and remove the port forward for port 32400. From then on, all connections to our Plex server will go through the SWAG reverse proxy over port 443.

Using Heimdall as the home page at domain root

In this example, we will set Heimdall as our homepage at domain root so when we navigate to https://linuxserver-test.com we will reach Heimdall. As before, we need to make sure port 443 is properly forwarded to our server.
We also need to make sure that, if we are using the docker cli method, we create a user defined bridge network as described above. Here's a docker compose stack we can use to set up both containers:

```yaml
---
version: "2.1"
services:
  heimdall:
    image: lscr.io/linuxserver/heimdall
    container_name: heimdall
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/London
    volumes:
      - /home/aptalca/appdata/heimdall:/config
    restart: unless-stopped
  swag:
    image: lscr.io/linuxserver/swag
    container_name: swag
    cap_add:
      - NET_ADMIN
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/London
      - URL=linuxserver-test.com
      - SUBDOMAINS=wildcard
      - VALIDATION=dns
      - DNSPLUGIN=cloudflare
    volumes:
      - /home/aptalca/appdata/swag:/config
    ports:
      - 443:443
      - 80:80
    restart: unless-stopped
```

Here are the docker cli versions:

Heimdall:

```shell
docker create \
  --name=heimdall \
  --net=lsio \
  -e PUID=1000 \
  -e PGID=1000 \
  -e TZ=Europe/London \
  -v /home/aptalca/appdata/heimdall:/config \
  --restart unless-stopped \
  lscr.io/linuxserver/heimdall
```

SWAG:

```shell
docker create \
  --name=swag \
  --cap-add=NET_ADMIN \
  --net=lsio \
  -e PUID=1000 \
  -e PGID=1000 \
  -e TZ=Europe/London \
  -e URL=linuxserver-test.com \
  -e SUBDOMAINS=wildcard \
  -e VALIDATION=dns \
  -e DNSPLUGIN=cloudflare \
  -p 443:443 \
  -p 80:80 \
  -v /home/aptalca/appdata/swag:/config \
  --restart unless-stopped \
  lscr.io/linuxserver/swag
```

Once the containers are set up, we'll find the file named heimdall.subfolder.conf.sample under SWAG's /config/nginx/proxy-confs folder and rename it to heimdall.subfolder.conf. If we look inside that conf file, we'll see that it is set to use location / {, which will cause an issue because there is already a location defined for / inside the default site config for SWAG.
So we need to edit the default site config at /config/nginx/site-confs/default and comment out the location block for / inside our main server block so it reads:

```nginx
#location / {
#    try_files $uri $uri/ /index.html /index.php?$args =404;
#}
```

That way, nginx will use the / location block from our heimdall proxy conf instead. After that, when we navigate to https://linuxserver-test.com, we'll see the Heimdall interface. If we want to password protect our new homepage, we can run the following on the host command line to create a new .htpasswd file: docker exec -it swag htpasswd -c /config/nginx/.htpasswd anyusername. After that, we can activate authentication by editing the heimdall.subfolder.conf file to uncomment the relevant lines so it reads:

```nginx
# enable the next two lines for http auth
auth_basic "Restricted";
auth_basic_user_file /config/nginx/.htpasswd;
```

Troubleshooting

We wrote a blogpost for the deprecated letsencrypt image diving into troubleshooting issues regarding dns and port-forwards, which is still a very good resource: blog.linuxserver.io

Common errors

404

This error simply means that the resource was not found. It commonly happens when you try to access a subfolder that is not enabled.

502

This error means that nginx can't talk to the application. There are a few common reasons for this: The application and SWAG are not on the same custom docker network. Further up we talk about how to set up Docker networking; however, there are some other common traps. The container name does not match the application name. Covered in the section Understanding the proxy conf structure. You manually changed the port.
Also covered in the section Understanding the proxy conf structure. The container originally ran with host networking, or on the default bridge. In most cases the contents of /config/nginx/resolver.conf should be resolver 127.0.0.11 valid=30s;. If this is not the case, you can either delete it and restart the container to have it regenerate, or manually set the content (we won't override it).

Final thoughts

This image can be used in many different scenarios, as it is a full fledged web server with some bells and whistles added. The above examples should be enough to get you started. For more information, please refer to the official documentation on either Github or Docker Hub. If you have questions or issues, or want to discuss and share ideas, feel free to visit our discord: https://discord.gg/YWrKVTn

How to request support

As you can see in this article, there are many different configurations, therefore we need to understand your exact setup before we can provide support. If you encounter a bug and confirm that it's a bug, please report it on our github thread. If you need help with setting it up, join our discord and upload the following info to a service like pastebin and post the link: the docker run/create or compose yml you used, the full docker log (docker logs swag), and any relevant conf files (default, nginx.conf or the specific proxy conf).

Understanding PUID and PGID

{% hint style="info" %} We are aware that recent versions of the Docker engine have introduced the --user flag. Our images are not yet compatible with this, so we recommend continuing usage of PUID and PGID. {% endhint %}

Why use these?

Docker runs all of its containers under the root user domain because it requires access to things like network configuration, process management, and your filesystem.
This means that the processes running inside your containers also run as root. This kind of elevated access is not ideal for day-to-day use, and potentially gives applications access to things they shouldn't (although a strong understanding of volume and port mapping will help with this). Another issue is file management within the container's mapped volumes. If the process is running under root, all files and directories created during the container's lifespan will be owned by root, thus becoming inaccessible by you. Using PUID and PGID allows our containers to map the container's internal user to a user on the host machine. All of our containers use this method of user mapping, and it should be applied accordingly.

Using the variables

When creating a container from one of our images, ensure you use the -e PUID and -e PGID options in your docker command:

```shell
docker create --name=beets -e PUID=1000 -e PGID=1000 linuxserver/beets
```

Or, if you use docker-compose, add them to the environment: section:

```yaml
environment:
  - PUID=1000
  - PGID=1000
```

It is most likely that you will use your own user id, which can be obtained by running the command below. The two values you will be interested in are the uid and gid.

```shell
id $user
```

Updating our containers

Our images are updated whenever the upstream application or dependencies get changed, so make sure you're always running the latest version, as they may contain important bug fixes and new features.

Steps required to update

Docker containers are, for the most part, immutable. This means that important configuration such as volume and port mappings can't be easily changed once the container has been created. The containers created from our images run a very specific version of the application they wrap, so in order to update the application, you must recreate the container.

Stop the container

Firstly, stop the container.
This means that important configuration such as volume and port mappings can't be easily changed once the container has been created. The containers created from our images run a very specific version of the application they wrap, so in order to update the application, you must recreate the container.
Stop the container
Firstly, stop the container.
docker stop <container_name>
Remove the container
Once the container has been stopped, remove it.
Important: Did you remember to persist the /config volume when you originally created the container? Bear in mind, you'll lose any configuration inside the container if this volume was not persisted. Read up on why this is important.
docker rm <container_name>
Pull the latest version
Now you can pull the latest version of the application image from Docker Hub.
docker pull linuxserver/<image_name>
Recreate the container
Finally, you can recreate the container. This is often cited as the most arduous task, as it requires you to remember all of the mappings you set beforehand. You can mitigate this step by using Docker Compose instead - this topic has been outlined in our documentation.
docker create \
  --name=<container_name> \
  -v <path_to_data>:/config \
  -e PUID=<PUID> \
  -e PGID=<PGID> \
  -p <host_port>:<app_port> \
  linuxserver/<image_name>
Docker Compose
It is also possible to update a single container using Docker Compose:
docker-compose pull linuxserver/<image_name>
docker-compose up -d
Or, to update all containers at once:
docker-compose pull
docker-compose up -d
Removing old images
Whenever a Docker image is updated, a fresh version of that image gets downloaded and stored on your host machine. Doing this, however, does not remove the old version of the image. Eventually you will end up with a lot of disk space used up by stale images. You can prune old images from your system, which will free up space:
docker image prune
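The stop/remove/pull steps above can be strung together in a short script. The sketch below only prints the commands (a dry run), so nothing touches a real Docker daemon; "beets" and "linuxserver/beets" are example values to substitute with your own:

```shell
# Dry-run sketch of the update cycle: build the command list, then print it.
# Drop the printf/echo indirection to run the commands for real.
name="beets"
image="linuxserver/beets"
cmds="docker stop $name
docker rm $name
docker pull $image"
printf '%s\n' "$cmds"
echo "docker create --name=$name ... $image  # recreate with your original flags"
```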
Volumes
In Docker terminology, a volume is a storage device that allows you to persist the data used and generated by each of your running containers. While a container remains alive (in either an active or inactive state), the data inside its user-space remains intact. However, if you decide to recreate a container, all data within that container is lost. Volumes are an intrinsic aspect of container management, so it is useful to know how to create them. There are two ways to map persistent storage to your containers: container volumes, and directory overlays. All of our images reference persistent data by means of directory overlays.
Mapping a volume to your container
Firstly, you must understand which directories from within your container you wish to persist. All of our images come with side-by-side documentation on which internal directories are used by the application. As mentioned in the Running our Containers documentation, the most common directory you will wish to persist is the /config directory.
Before you create your container, first create a directory on the host machine that will act as the home for your persisted data. We recommend creating the directory /opt/appdata. Under this tree, you can create a single configuration directory for each of your containers. When creating the container itself, now is the time to make use of the -v flag, which tells Docker to overlay your host directory over the container's directory:
docker create --name my_container \
  -v /opt/appdata/my_config:/config \
  linuxserver/<image_name>
The above example shows how the usage of -v has mapped the host machine's /opt/appdata/my_config directory over the container's internal /config directory.
Remember: When dealing with mapping overlays, it always reads host:container
You can do this for as many directories as required by either you or the container itself. Our rule of thumb is to always map the /config directory, as this contains pertinent runtime configuration for the underlying application. For applications that require further data, such as media, our documentation will clearly indicate which internal directories need mapping.
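The host:container ordering can be demonstrated in plain shell; here a directory under /tmp (via mktemp) stands in for /opt/appdata so the sketch runs unprivileged:

```shell
# Create the host-side directory first, then build the -v mapping.
host_dir="$(mktemp -d)/my_config"
mkdir -p "$host_dir"
mapping="${host_dir}:/config"
# host:container - everything left of the colon is the host path,
# everything right of it is the path inside the container.
echo "host side:      ${mapping%%:*}"
echo "container side: ${mapping#*:}"
```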
Images
Each of our images requires its own specific configuration before you can begin making use of it.
If you're new to our images, please take the time to read through our documentation.
linuxserver/adguardhome-sync
Adguardhome-sync is a tool to synchronize AdGuardHome config to replica instances.
Supported Architectures
We utilise the docker manifest for multi-platform awareness. More information is available from docker here and our announcement here.
Simply pulling lscr.io/linuxserver/adguardhome-sync:latest should retrieve the correct image for your arch, but you can also pull specific arch images via tags.
The architectures supported by this image are:
Architecture
Available
Tag
x86-64
✅
amd64-<version tag>
arm64
✅
arm64v8-<version tag>
armhf
✅
arm32v7-<version tag>
Version Tags
This image provides various versions that are available via tags. Please read the descriptions carefully and exercise caution when using unstable or development tags.
Tag
Available
Description
latest
✅
Stable releases from GitHub
Application Setup
Edit adguardhome-sync.yaml with your AdGuardHome instance details; for more information, check out AdGuardHome Sync.
Usage
To help you get started creating a container from this image, you can use either docker-compose or the docker cli.
docker-compose (recommended, click here for more info)
---
version: "2.1"
services:
  adguardhome-sync:
    image: lscr.io/linuxserver/adguardhome-sync:latest
    container_name: adguardhome-sync
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/New_York
      - CONFIGFILE=/config/adguardhome-sync.yaml #optional
    volumes:
      - /path/to/appdata/config:/config
    ports:
      - 8080:8080
    restart: unless-stopped
docker cli (click here for more info)
docker run -d \
  --name=adguardhome-sync \
  -e PUID=1000 \
  -e PGID=1000 \
  -e TZ=America/New_York \
  -e CONFIGFILE=/config/adguardhome-sync.yaml `#optional` \
  -p 8080:8080 \
  -v /path/to/appdata/config:/config \
  --restart unless-stopped \
  lscr.io/linuxserver/adguardhome-sync:latest
Parameters
Docker images are configured using parameters passed at runtime (such as those above). These parameters are separated by a colon and indicate <external>:<internal> respectively. For example, -p 8080:80 would expose port 80 from inside the container to be accessible from the host's IP on port 8080 outside the container.
Ports (-p)
Parameter
Function
8080
Port for AdGuardHome Sync's web API.
Environment Variables (-e)
Env
Function
PUID=1000
for UserID - see below for explanation
PGID=1000
for GroupID - see below for explanation
TZ=America/New_York
Specify a timezone to use, e.g. America/New_York
CONFIGFILE=/config/adguardhome-sync.yaml
Set a custom config file.
Volume Mappings (-v)
Volume
Function
/config
Contains all relevant configuration files.
Miscellaneous Options
Parameter
Function
Environment variables from files (Docker secrets)
You can set any environment variable from a file by using the special prefix FILE__. As an example:
-e FILE__PASSWORD=/run/secrets/mysecretpassword
will set the environment variable PASSWORD based on the contents of the /run/secrets/mysecretpassword file.
Umask for running applications
For all of our images, we provide the ability to override the default umask setting for services started within the containers using the optional -e UMASK=022 setting. Keep in mind that umask is not chmod: it subtracts from permissions based on its value, it does not add. Please read up here before asking for support.
User / Group Identifiers
When using volumes (-v flags), permissions issues can arise between the host OS and the container; we avoid this issue by allowing you to specify the user PUID and group PGID. Ensure any volume directories on the host are owned by the same user you specify, and any permissions issues will vanish like magic.
In this instance PUID=1000 and PGID=1000; to find yours, use id user as below:
$ id username
uid=1000(dockeruser) gid=1000(dockergroup) groups=1000(dockergroup)
Docker Mods
We publish various Docker Mods to enable additional functionality within the containers. The list of Mods available for this image (if any), as well as universal mods that can be applied to any one of our images, can be accessed via the dynamic badges above.
Support Info
Shell access whilst the container is running:
docker exec -it adguardhome-sync /bin/bash
To monitor the logs of the container in realtime:
docker logs -f adguardhome-sync
Container version number:
docker inspect -f '{{ index .Config.Labels "build_version" }}' adguardhome-sync
Image version number:
docker inspect -f '{{ index .Config.Labels "build_version" }}' lscr.io/linuxserver/adguardhome-sync:latest
Versions
03.10.22: - Rebase to Alpine 3.16, migrate to s6v3.
18.12.21: - Rebase to Alpine 3.15.
09.08.21: - Rebase to Alpine 3.14.
08.04.21: - Initial Release.
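The Application Setup section above says to edit adguardhome-sync.yaml. As an illustrative sketch only — the field names follow the upstream AdGuardHome Sync project's examples and may differ between versions, and the URLs and credentials are placeholders:

```yaml
# Illustrative adguardhome-sync.yaml: one origin synced to one replica.
cron: "0 */2 * * *"            # how often to run the sync
origin:
  url: http://192.168.1.2:3000 # placeholder address of the primary instance
  username: admin
  password: password
replica:
  url: http://192.168.1.3:3000 # placeholder address of the replica
  username: admin
  password: password
```

Consult the AdGuardHome Sync documentation linked above for the authoritative schema before deploying.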
linuxserver/airsonic-advanced
Airsonic-advanced is a free, web-based media streamer, providing ubiquitous access to your music.
Use it to share your music with friends, or to listen to your own music while at work. You can stream to multiple players simultaneously, for instance to one player in your kitchen and another in your living room.
Supported Architectures
We utilise the docker manifest for multi-platform awareness. More information is available from docker here and our announcement here.
Simply pulling lscr.io/linuxserver/airsonic-advanced:latest should retrieve the correct image for your arch, but you can also pull specific arch images via tags.
The architectures supported by this image are:
Architecture
Available
Tag
x86-64
✅
amd64-<version tag>
arm64
✅
arm64v8-<version tag>
armhf
❌
Version Tags
This image provides various versions that are available via tags. Please read the descriptions carefully and exercise caution when using unstable or development tags.
Tag
Available
Description
latest
✅
Latest releases of Airsonic-Advanced
Application Setup
We don't formally support upgrading from Airsonic to Airsonic Advanced; it may or may not work for you, and we'd recommend making backups before attempting this. Following the upgrade you may experience a forced rescan of your library, so take this into account if you have a lot of files. Please see the notes about upgrading from v10 to v11 here.
Access the WebUI at <your-ip>:4040. The default user/pass is admin/admin.
Extra java options can be passed with the JAVA_OPTS environment variable, e.g. -e JAVA_OPTS="-Xmx256m -Xms256m".
For some reverse proxies, you may need to pass JAVA_OPTS=-Dserver.use-forward-headers=true for airsonic to generate the proper URL schemes.
Note that if you want to use Airsonic's Java jukebox player, then PGID will need to match the group of your sound device (e.g. /dev/snd).
Usage
To help you get started creating a container from this image, you can use either docker-compose or the docker cli.
docker-compose (recommended, click here for more info)
---
version: "2.1"
services:
  airsonic-advanced:
    image: lscr.io/linuxserver/airsonic-advanced:latest
    container_name: airsonic-advanced
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/London
      - CONTEXT_PATH= #optional
      - JAVA_OPTS= #optional
    volumes:
      - <path to data>:/config
      - <path to music>:/music
      - <path to playlists>:/playlists
      - <path to podcasts>:/podcasts
      - <path to other media>:/media #optional
    ports:
      - 4040:4040
    devices:
      - /dev/snd:/dev/snd #optional
    restart: unless-stopped
docker cli (click here for more info)
docker run -d \
  --name=airsonic-advanced \
  -e PUID=1000 \
  -e PGID=1000 \
  -e TZ=Europe/London \
  -e CONTEXT_PATH= `#optional` \
  -e JAVA_OPTS= `#optional` \
  -p 4040:4040 \
  -v <path to data>:/config \
  -v <path to music>:/music \
  -v <path to playlists>:/playlists \
  -v <path to podcasts>:/podcasts \
  -v <path to other media>:/media `#optional` \
  --device /dev/snd:/dev/snd `#optional` \
  --restart unless-stopped \
  lscr.io/linuxserver/airsonic-advanced:latest
Parameters
Docker images are configured using parameters passed at runtime (such as those above). These parameters are separated by a colon and indicate <external>:<internal> respectively. For example, -p 8080:80 would expose port 80 from inside the container to be accessible from the host's IP on port 8080 outside the container.
Ports (-p)
Parameter
Function
4040
WebUI
Environment Variables (-e)
Env
Function
PUID=1000
for UserID - see below for explanation
PGID=1000
for GroupID - see below for explanation
TZ=Europe/London
Specify a timezone to use, e.g. Europe/London.
CONTEXT_PATH=
For setting url-base in reverse proxy setups.
JAVA_OPTS=
For passing additional java options.
Volume Mappings (-v)
Volume
Function
/config
Configuration file location.
/music
Location of music.
/playlists
Location for playlists to be saved to.
/podcasts
Location of podcasts.
/media
Location of other media.
Device Mappings (--device)
Parameter
Function
/dev/snd
Only needed to pass your host sound device to Airsonic's Java jukebox player.
Miscellaneous Options
Parameter
Function
Environment variables from files (Docker secrets)
You can set any environment variable from a file by using the special prefix FILE__. As an example:
-e FILE__PASSWORD=/run/secrets/mysecretpassword
will set the environment variable PASSWORD based on the contents of the /run/secrets/mysecretpassword file.
Umask for running applications
For all of our images, we provide the ability to override the default umask setting for services started within the containers using the optional -e UMASK=022 setting. Keep in mind that umask is not chmod: it subtracts from permissions based on its value, it does not add. Please read up here before asking for support.
User / Group Identifiers
When using volumes (-v flags), permissions issues can arise between the host OS and the container; we avoid this issue by allowing you to specify the user PUID and group PGID. Ensure any volume directories on the host are owned by the same user you specify, and any permissions issues will vanish like magic.
In this instance PUID=1000 and PGID=1000; to find yours, use id user as below:
$ id username
uid=1000(dockeruser) gid=1000(dockergroup) groups=1000(dockergroup)
Docker Mods
We publish various Docker Mods to enable additional functionality within the containers. The list of Mods available for this image (if any), as well as universal mods that can be applied to any one of our images, can be accessed via the dynamic badges above.
Support Info
Shell access whilst the container is running:
docker exec -it airsonic-advanced /bin/bash
To monitor the logs of the container in realtime:
docker logs -f airsonic-advanced
Container version number:
docker inspect -f '{{ index .Config.Labels "build_version" }}' airsonic-advanced
Image version number:
docker inspect -f '{{ index .Config.Labels "build_version" }}' lscr.io/linuxserver/airsonic-advanced:latest
Versions
23.10.22: - Rebase to Alpine 3.16, migrate to s6v3.
25.07.22: - Add vorbis-tools.
02.01.22: - Initial Release.
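The FILE__ convention described under "Environment variables from files" can be imitated in plain shell to show what the container does at startup; a mktemp file stands in for /run/secrets here:

```shell
# Imitate the FILE__ prefix: the container init strips FILE__ and sets the
# plain variable to the named file's contents.
secret_file="$(mktemp)"
printf 'supersecret' > "$secret_file"
FILE__PASSWORD="$secret_file"
PASSWORD="$(cat "$FILE__PASSWORD")"
echo "PASSWORD resolved from $FILE__PASSWORD"
rm -f "$secret_file"
```

The secret value itself never appears in the container's command line, which is the point of the convention.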
For some reverse proxies, you may need to pass JAVA_OPTS=-Dserver.use-forward-headers=true for airsonic to generate the proper URL schemes. Note that if you want to use Airsonic's Java jukebox player , then PGID will need to match the group of your sound device (e.g. /dev/snd ).","title":"Application Setup"},{"location":"images/docker-airsonic-advanced/#usage","text":"To help you get started creating a container from this image you can either use docker-compose or the docker cli.","title":"Usage"},{"location":"images/docker-airsonic-advanced/#docker-compose-recommended-click-here-for-more-info","text":"--- version: \"2.1\" services: airsonic-advanced: image: lscr.io/linuxserver/airsonic-advanced:latest container_name: airsonic-advanced environment: - PUID=1000 - PGID=1000 - TZ=Europe/London - CONTEXT_PATH= #optional - JAVA_OPTS= #optional volumes: - :/config - :/music - :/playlists - :/podcasts - :/media #optional ports: - 4040:4040 devices: - /dev/snd:/dev/snd #optional restart: unless-stopped","title":"docker-compose (recommended, click here for more info)"},{"location":"images/docker-airsonic-advanced/#docker-cli-click-here-for-more-info","text":"docker run -d \\ --name=airsonic-advanced \\ -e PUID=1000 \\ -e PGID=1000 \\ -e TZ=Europe/London \\ -e CONTEXT_PATH= `#optional` \\ -e JAVA_OPTS= `#optional` \\ -p 4040:4040 \\ -v :/config \\ -v :/music \\ -v :/playlists \\ -v :/podcasts \\ -v :/media `#optional` \\ --device /dev/snd:/dev/snd `#optional` \\ --restart unless-stopped \\ lscr.io/linuxserver/airsonic-advanced:latest","title":"docker cli (click here for more info)"},{"location":"images/docker-airsonic-advanced/#parameters","text":"Docker images are configured using parameters passed at runtime (such as those above). These parameters are separated by a colon and indicate : respectively. 
For example, -p 8080:80 would expose port 80 from inside the container to be accessible from the host's IP on port 8080 outside the container.","title":"Parameters"},{"location":"images/docker-airsonic-advanced/#ports-p","text":"Parameter Function 4040 WebUI","title":"Ports (-p)"},{"location":"images/docker-airsonic-advanced/#environment-variables-e","text":"Env Function PUID=1000 for UserID - see below for explanation PGID=1000 for GroupID - see below for explanation TZ=Europe/London Specify a timezone to use EG Europe/London. CONTEXT_PATH= For setting url-base in reverse proxy setups. JAVA_OPTS= For passing additional java options.","title":"Environment Variables (-e)"},{"location":"images/docker-airsonic-advanced/#volume-mappings-v","text":"Volume Function /config Configuration file location. /music Location of music. /playlists Location for playlists to be saved to. /podcasts Location of podcasts. /media Location of other media.","title":"Volume Mappings (-v)"},{"location":"images/docker-airsonic-advanced/#device-mappings-device","text":"Parameter Function /dev/snd Only needed to pass your host sound device to Airsonic's Java jukebox player.","title":"Device Mappings (--device)"},{"location":"images/docker-airsonic-advanced/#miscellaneous-options","text":"Parameter Function","title":"Miscellaneous Options"},{"location":"images/docker-airsonic-advanced/#environment-variables-from-files-docker-secrets","text":"You can set any environment variable from a file by using a special prepend FILE__ . 
As an example: -e FILE__PASSWORD=/run/secrets/mysecretpassword Will set the environment variable PASSWORD based on the contents of the /run/secrets/mysecretpassword file.","title":"Environment variables from files (Docker secrets)"},{"location":"images/docker-airsonic-advanced/#umask-for-running-applications","text":"For all of our images we provide the ability to override the default umask settings for services started within the containers using the optional -e UMASK=022 setting. Keep in mind umask is not chmod it subtracts from permissions based on it's value it does not add. Please read up here before asking for support.","title":"Umask for running applications"},{"location":"images/docker-airsonic-advanced/#user-group-identifiers","text":"When using volumes ( -v flags), permissions issues can arise between the host OS and the container, we avoid this issue by allowing you to specify the user PUID and group PGID . Ensure any volume directories on the host are owned by the same user you specify and any permissions issues will vanish like magic. In this instance PUID=1000 and PGID=1000 , to find yours use id user as below: $ id username uid=1000(dockeruser) gid=1000(dockergroup) groups=1000(dockergroup)","title":"User / Group Identifiers"},{"location":"images/docker-airsonic-advanced/#docker-mods","text":"We publish various Docker Mods to enable additional functionality within the containers. 
The list of Mods available for this image (if any) as well as universal mods that can be applied to any one of our images can be accessed via the dynamic badges above.","title":"Docker Mods"},{"location":"images/docker-airsonic-advanced/#support-info","text":"Shell access whilst the container is running: docker exec -it airsonic-advanced /bin/bash To monitor the logs of the container in realtime: docker logs -f airsonic-advanced Container version number docker inspect -f '{{ index .Config.Labels \"build_version\" }}' airsonic-advanced Image version number docker inspect -f '{{ index .Config.Labels \"build_version\" }}' lscr.io/linuxserver/airsonic-advanced:latest","title":"Support Info"},{"location":"images/docker-airsonic-advanced/#versions","text":"23.10.22: - Rebase to Alpine 3.16, migrate to s6v3. 25.07.22: - Add vorbis-tools. 02.01.22: - Initial Release.","title":"Versions"},{"location":"images/docker-airsonic/","text":"DEPRECATION NOTICE This image is deprecated. We will not offer support for this image and it will not be updated. We recommend our airsonic-advanced image instead: https://github.com/linuxserver/docker-airsonic-advanced linuxserver/airsonic Airsonic is a free, web-based media streamer, providing ubiquitous access to your music. Use it to share your music with friends, or to listen to your own music while at work. You can stream to multiple players simultaneously, for instance to one player in your kitchen and another in your living room. Supported Architectures Our images support multiple architectures such as x86-64 , arm64 and armhf . We utilise the docker manifest for multi-platform awareness. More information is available from docker here and our announcement here . Simply pulling lscr.io/linuxserver/airsonic should retrieve the correct image for your arch, but you can also pull specific arch images via tags. 
The architectures supported by this image are: Architecture Tag x86-64 amd64-latest arm64 arm64v8-latest armhf arm32v7-latest Application Setup Access WebUI at :4040 . Default user/pass is admin/admin Extra java options can be passed with the JAVA_OPTS environment variable, eg -e JAVA_OPTS=\"-Xmx256m -Xms256m\" . For some reverse proxies, you may need to pass JAVA_OPTS=-Dserver.use-forward-headers=true for airsonic to generate the proper URL schemes. Note that if you want to use Airsonic's Java jukebox player , then PGID will need to match the group of your sound device (e.g. /dev/snd ). Usage To help you get started creating a container from this image you can either use docker-compose or the docker cli. docker-compose (recommended, click here for more info ) --- version: \"2.1\" services: airsonic: image: lscr.io/linuxserver/airsonic container_name: airsonic environment: - PUID=1000 - PGID=1000 - TZ=Europe/London - CONTEXT_PATH= #optional - JAVA_OPTS= #optional volumes: - :/config - :/music - :/playlists - :/podcasts - :/media #optional ports: - 4040:4040 devices: - /dev/snd:/dev/snd #optional restart: unless-stopped docker cli ( click here for more info ) docker run -d \\ --name=airsonic \\ -e PUID=1000 \\ -e PGID=1000 \\ -e TZ=Europe/London \\ -e CONTEXT_PATH= `#optional` \\ -e JAVA_OPTS= `#optional` \\ -p 4040:4040 \\ -v :/config \\ -v :/music \\ -v :/playlists \\ -v :/podcasts \\ -v :/media `#optional` \\ --device /dev/snd:/dev/snd `#optional` \\ --restart unless-stopped \\ lscr.io/linuxserver/airsonic Parameters Docker images are configured using parameters passed at runtime (such as those above). These parameters are separated by a colon and indicate external:internal respectively. For example, -p 8080:80 would expose port 80 from inside the container to be accessible from the host's IP on port 8080 outside the container. 
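As a concrete illustration of the external:internal convention above, a compose override could remap the WebUI and cap the JVM heap. This is a hypothetical sketch, not a project default: the host port 8080 and the -Xmx256m value are arbitrary choices for the example.

```yaml
# Hypothetical override: WebUI reachable on host port 8080, JVM capped at 256 MB
services:
  airsonic:
    image: lscr.io/linuxserver/airsonic
    environment:
      - JAVA_OPTS=-Xmx256m -Xms256m
    ports:
      - 8080:4040   # external (host) : internal (container)
```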
Ports ( -p ) Parameter Function 4040 WebUI Environment Variables ( -e ) Env Function PUID=1000 for UserID - see below for explanation PGID=1000 for GroupID - see below for explanation TZ=Europe/London Specify a timezone to use EG Europe/London. CONTEXT_PATH= For setting url-base in reverse proxy setups. JAVA_OPTS= For passing additional java options. Volume Mappings ( -v ) Volume Function /config Configuration file location. /music Location of music. /playlists Location for playlists to be saved to. /podcasts Location of podcasts. /media Location of other media. Device Mappings ( --device ) Parameter Function /dev/snd Only needed to pass your host sound device to Airsonic's Java jukebox player. Miscellaneous Options Parameter Function Environment variables from files (Docker secrets) You can set any environment variable from a file by using a special prefix FILE__ . As an example: -e FILE__PASSWORD=/run/secrets/mysecretpassword will set the environment variable PASSWORD based on the contents of the /run/secrets/mysecretpassword file. Umask for running applications For all of our images we provide the ability to override the default umask settings for services started within the containers using the optional -e UMASK=022 setting. Keep in mind umask is not chmod; it subtracts from permissions based on its value, it does not add. Please read up here before asking for support. User / Group Identifiers When using volumes ( -v flags), permissions issues can arise between the host OS and the container. We avoid this issue by allowing you to specify the user PUID and group PGID . Ensure any volume directories on the host are owned by the same user you specify and any permissions issues will vanish like magic. In this instance PUID=1000 and PGID=1000 , to find yours use id user as below: $ id username uid=1000(dockeruser) gid=1000(dockergroup) groups=1000(dockergroup) Docker Mods We publish various Docker Mods to enable additional functionality within the containers. 
The list of Mods available for this image (if any) as well as universal mods that can be applied to any one of our images can be accessed via the dynamic badges above. Support Info Shell access whilst the container is running: docker exec -it airsonic /bin/bash To monitor the logs of the container in realtime: docker logs -f airsonic Container version number docker inspect -f '{{ index .Config.Labels \"build_version\" }}' airsonic Image version number docker inspect -f '{{ index .Config.Labels \"build_version\" }}' lscr.io/linuxserver/airsonic Versions 13.01.22: - Deprecate in favor of airsonic-advanced. 24.03.19: - Switching to new Base images, shift to arm32v7 tag. 26.01.19: - Add pipeline logic and multi arch. 05.01.19: - Linting fixes. 27.08.18: - Use new inhouse java baseimage for quicker builds. 23.08.18: - Rebase to ubuntu bionic for increased performance across all arches. 22.04.18: - Add the forgotten JAVA_OPTS to the run command. 29.12.17: - Initial Release.","title":"airsonic"},{"location":"images/docker-airsonic/#deprecation-notice","text":"This image is deprecated. We will not offer support for this image and it will not be updated. We recommend our airsonic-advanced image instead: https://github.com/linuxserver/docker-airsonic-advanced","title":"DEPRECATION NOTICE"},{"location":"images/docker-airsonic/#linuxserverairsonic","text":"Airsonic is a free, web-based media streamer, providing ubiquitous access to your music. Use it to share your music with friends, or to listen to your own music while at work. You can stream to multiple players simultaneously, for instance to one player in your kitchen and another in your living room.","title":"linuxserver/airsonic"},{"location":"images/docker-airsonic/#supported-architectures","text":"Our images support multiple architectures such as x86-64 , arm64 and armhf . We utilise the docker manifest for multi-platform awareness. More information is available from docker here and our announcement here . 
Simply pulling lscr.io/linuxserver/airsonic should retrieve the correct image for your arch, but you can also pull specific arch images via tags. The architectures supported by this image are: Architecture Tag x86-64 amd64-latest arm64 arm64v8-latest armhf arm32v7-latest","title":"Supported Architectures"},{"location":"images/docker-airsonic/#application-setup","text":"Access WebUI at :4040 . Default user/pass is admin/admin Extra java options can be passed with the JAVA_OPTS environment variable, eg -e JAVA_OPTS=\"-Xmx256m -Xms256m\" . For some reverse proxies, you may need to pass JAVA_OPTS=-Dserver.use-forward-headers=true for airsonic to generate the proper URL schemes. Note that if you want to use Airsonic's Java jukebox player , then PGID will need to match the group of your sound device (e.g. /dev/snd ).","title":"Application Setup"},{"location":"images/docker-airsonic/#usage","text":"To help you get started creating a container from this image you can either use docker-compose or the docker cli.","title":"Usage"},{"location":"images/docker-airsonic/#docker-compose-recommended-click-here-for-more-info","text":"--- version: \"2.1\" services: airsonic: image: lscr.io/linuxserver/airsonic container_name: airsonic environment: - PUID=1000 - PGID=1000 - TZ=Europe/London - CONTEXT_PATH= #optional - JAVA_OPTS= #optional volumes: - :/config - :/music - :/playlists - :/podcasts - :/media #optional ports: - 4040:4040 devices: - /dev/snd:/dev/snd #optional restart: unless-stopped","title":"docker-compose (recommended, click here for more info)"},{"location":"images/docker-airsonic/#docker-cli-click-here-for-more-info","text":"docker run -d \\ --name=airsonic \\ -e PUID=1000 \\ -e PGID=1000 \\ -e TZ=Europe/London \\ -e CONTEXT_PATH= `#optional` \\ -e JAVA_OPTS= `#optional` \\ -p 4040:4040 \\ -v :/config \\ -v :/music \\ -v :/playlists \\ -v :/podcasts \\ -v :/media `#optional` \\ --device /dev/snd:/dev/snd `#optional` \\ --restart unless-stopped \\ 
lscr.io/linuxserver/airsonic","title":"docker cli (click here for more info)"},{"location":"images/docker-airsonic/#parameters","text":"Docker images are configured using parameters passed at runtime (such as those above). These parameters are separated by a colon and indicate external:internal respectively. For example, -p 8080:80 would expose port 80 from inside the container to be accessible from the host's IP on port 8080 outside the container.","title":"Parameters"},{"location":"images/docker-airsonic/#ports-p","text":"Parameter Function 4040 WebUI","title":"Ports (-p)"},{"location":"images/docker-airsonic/#environment-variables-e","text":"Env Function PUID=1000 for UserID - see below for explanation PGID=1000 for GroupID - see below for explanation TZ=Europe/London Specify a timezone to use EG Europe/London. CONTEXT_PATH= For setting url-base in reverse proxy setups. JAVA_OPTS= For passing additional java options.","title":"Environment Variables (-e)"},{"location":"images/docker-airsonic/#volume-mappings-v","text":"Volume Function /config Configuration file location. /music Location of music. /playlists Location for playlists to be saved to. /podcasts Location of podcasts. /media Location of other media.","title":"Volume Mappings (-v)"},{"location":"images/docker-airsonic/#device-mappings-device","text":"Parameter Function /dev/snd Only needed to pass your host sound device to Airsonic's Java jukebox player.","title":"Device Mappings (--device)"},{"location":"images/docker-airsonic/#miscellaneous-options","text":"Parameter Function","title":"Miscellaneous Options"},{"location":"images/docker-airsonic/#environment-variables-from-files-docker-secrets","text":"You can set any environment variable from a file by using a special prefix FILE__ . 
As an example: -e FILE__PASSWORD=/run/secrets/mysecretpassword will set the environment variable PASSWORD based on the contents of the /run/secrets/mysecretpassword file.","title":"Environment variables from files (Docker secrets)"},{"location":"images/docker-airsonic/#umask-for-running-applications","text":"For all of our images we provide the ability to override the default umask settings for services started within the containers using the optional -e UMASK=022 setting. Keep in mind umask is not chmod; it subtracts from permissions based on its value, it does not add. Please read up here before asking for support.","title":"Umask for running applications"},{"location":"images/docker-airsonic/#user-group-identifiers","text":"When using volumes ( -v flags), permissions issues can arise between the host OS and the container. We avoid this issue by allowing you to specify the user PUID and group PGID . Ensure any volume directories on the host are owned by the same user you specify and any permissions issues will vanish like magic. In this instance PUID=1000 and PGID=1000 , to find yours use id user as below: $ id username uid=1000(dockeruser) gid=1000(dockergroup) groups=1000(dockergroup)","title":"User / Group Identifiers"},{"location":"images/docker-airsonic/#docker-mods","text":"We publish various Docker Mods to enable additional functionality within the containers. 
The list of Mods available for this image (if any) as well as universal mods that can be applied to any one of our images can be accessed via the dynamic badges above.","title":"Docker Mods"},{"location":"images/docker-airsonic/#support-info","text":"Shell access whilst the container is running: docker exec -it airsonic /bin/bash To monitor the logs of the container in realtime: docker logs -f airsonic Container version number docker inspect -f '{{ index .Config.Labels \"build_version\" }}' airsonic Image version number docker inspect -f '{{ index .Config.Labels \"build_version\" }}' lscr.io/linuxserver/airsonic","title":"Support Info"},{"location":"images/docker-airsonic/#versions","text":"13.01.22: - Deprecate in favor of airsonic-advanced. 24.03.19: - Switching to new Base images, shift to arm32v7 tag. 26.01.19: - Add pipeline logic and multi arch. 05.01.19: - Linting fixes. 27.08.18: - Use new inhouse java baseimage for quicker builds. 23.08.18: - Rebase to ubuntu bionic for increased performance across all arches. 22.04.18: - Add the forgotten JAVA_OPTS to the run command. 29.12.17: - Initial Release.","title":"Versions"},{"location":"images/docker-apprise-api/","text":"linuxserver/apprise-api Apprise-api takes advantage of Apprise through your network with a user-friendly API. Send notifications to more than 65 services. An incredibly lightweight gateway to Apprise. A production-ready micro-service at your disposal. Apprise API was designed to easily fit into existing (and new) ecosystems that are looking for a simple notification solution. Supported Architectures We utilise the docker manifest for multi-platform awareness. More information is available from docker here and our announcement here . Simply pulling lscr.io/linuxserver/apprise-api:latest should retrieve the correct image for your arch, but you can also pull specific arch images via tags. 
The architectures supported by this image are: Architecture Available Tag x86-64 \u2705 amd64-\\ arm64 \u2705 arm64v8-\\ armhf \u2705 arm32v7-\\ Usage To help you get started creating a container from this image you can either use docker-compose or the docker cli. docker-compose (recommended, click here for more info ) --- version: \"2.1\" services: apprise-api: image: lscr.io/linuxserver/apprise-api:latest container_name: apprise-api environment: - PUID=1000 - PGID=1000 - TZ=Europe/London volumes: - /path/to/config:/config ports: - 8000:8000 restart: unless-stopped docker cli ( click here for more info ) docker run -d \\ --name=apprise-api \\ -e PUID=1000 \\ -e PGID=1000 \\ -e TZ=Europe/London \\ -p 8000:8000 \\ -v /path/to/config:/config \\ --restart unless-stopped \\ lscr.io/linuxserver/apprise-api:latest Parameters Docker images are configured using parameters passed at runtime (such as those above). These parameters are separated by a colon and indicate external:internal respectively. For example, -p 8080:80 would expose port 80 from inside the container to be accessible from the host's IP on port 8080 outside the container. Ports ( -p ) Parameter Function 8000 Port for apprise's interface and API. Environment Variables ( -e ) Env Function PUID=1000 for UserID - see below for explanation PGID=1000 for GroupID - see below for explanation TZ=Europe/London Specify a timezone to use EG Europe/London. Volume Mappings ( -v ) Volume Function /config Where config is stored. Miscellaneous Options Parameter Function Environment variables from files (Docker secrets) You can set any environment variable from a file by using a special prefix FILE__ . As an example: -e FILE__PASSWORD=/run/secrets/mysecretpassword will set the environment variable PASSWORD based on the contents of the /run/secrets/mysecretpassword file. 
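The FILE__ mechanism described above can be sketched outside of a container. The loop below is a hypothetical re-implementation for illustration only (it is not the image's actual init code, and the /tmp path is a stand-in for a real Docker secret):

```shell
#!/bin/sh
# Hypothetical sketch of FILE__ prefix handling, for illustration only.
printf 'supersecret' > /tmp/mysecretpassword      # stand-in for a Docker secret file
export FILE__PASSWORD=/tmp/mysecretpassword       # what -e FILE__PASSWORD=... provides

# For each FILE__* variable, read the target file into the unprefixed variable.
# (Assumes values contain no whitespace; real init scripts are more careful.)
for pair in $(env | grep '^FILE__'); do
  name=${pair%%=*}                # e.g. FILE__PASSWORD
  file=${pair#*=}                 # e.g. /tmp/mysecretpassword
  export "${name#FILE__}=$(cat "$file")"
done

echo "$PASSWORD"                  # prints: supersecret
```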
Umask for running applications For all of our images we provide the ability to override the default umask settings for services started within the containers using the optional -e UMASK=022 setting. Keep in mind umask is not chmod; it subtracts from permissions based on its value, it does not add. Please read up here before asking for support. User / Group Identifiers When using volumes ( -v flags), permissions issues can arise between the host OS and the container. We avoid this issue by allowing you to specify the user PUID and group PGID . Ensure any volume directories on the host are owned by the same user you specify and any permissions issues will vanish like magic. In this instance PUID=1000 and PGID=1000 , to find yours use id user as below: $ id username uid=1000(dockeruser) gid=1000(dockergroup) groups=1000(dockergroup) Docker Mods We publish various Docker Mods to enable additional functionality within the containers. The list of Mods available for this image (if any) as well as universal mods that can be applied to any one of our images can be accessed via the dynamic badges above. Support Info Shell access whilst the container is running: docker exec -it apprise-api /bin/bash To monitor the logs of the container in realtime: docker logs -f apprise-api Container version number docker inspect -f '{{ index .Config.Labels \"build_version\" }}' apprise-api Image version number docker inspect -f '{{ index .Config.Labels \"build_version\" }}' lscr.io/linuxserver/apprise-api:latest Versions 17.10.22: - Rebase to alpine 3.16, migrate to S6V3. 28.02.21: - Rebase to alpine 3.15. 03.11.21: - Increase uWSGI buffer size to 32kb. 16.05.21: - Add linuxserver wheel index. 26.02.21: - Initial Release.","title":"apprise-api"},{"location":"images/docker-apprise-api/#linuxserverapprise-api","text":"Apprise-api takes advantage of Apprise through your network with a user-friendly API. Send notifications to more than 65 services. An incredibly lightweight gateway to Apprise. 
A production-ready micro-service at your disposal. Apprise API was designed to easily fit into existing (and new) ecosystems that are looking for a simple notification solution.","title":"linuxserver/apprise-api"},{"location":"images/docker-apprise-api/#supported-architectures","text":"We utilise the docker manifest for multi-platform awareness. More information is available from docker here and our announcement here . Simply pulling lscr.io/linuxserver/apprise-api:latest should retrieve the correct image for your arch, but you can also pull specific arch images via tags. The architectures supported by this image are: Architecture Available Tag x86-64 \u2705 amd64-\\ arm64 \u2705 arm64v8-\\ armhf \u2705 arm32v7-\\","title":"Supported Architectures"},{"location":"images/docker-apprise-api/#usage","text":"To help you get started creating a container from this image you can either use docker-compose or the docker cli.","title":"Usage"},{"location":"images/docker-apprise-api/#docker-compose-recommended-click-here-for-more-info","text":"--- version: \"2.1\" services: apprise-api: image: lscr.io/linuxserver/apprise-api:latest container_name: apprise-api environment: - PUID=1000 - PGID=1000 - TZ=Europe/London volumes: - /path/to/config:/config ports: - 8000:8000 restart: unless-stopped","title":"docker-compose (recommended, click here for more info)"},{"location":"images/docker-apprise-api/#docker-cli-click-here-for-more-info","text":"docker run -d \\ --name=apprise-api \\ -e PUID=1000 \\ -e PGID=1000 \\ -e TZ=Europe/London \\ -p 8000:8000 \\ -v /path/to/config:/config \\ --restart unless-stopped \\ lscr.io/linuxserver/apprise-api:latest","title":"docker cli (click here for more info)"},{"location":"images/docker-apprise-api/#parameters","text":"Docker images are configured using parameters passed at runtime (such as those above). These parameters are separated by a colon and indicate external:internal respectively. 
For example, -p 8080:80 would expose port 80 from inside the container to be accessible from the host's IP on port 8080 outside the container.","title":"Parameters"},{"location":"images/docker-apprise-api/#ports-p","text":"Parameter Function 8000 Port for apprise's interface and API.","title":"Ports (-p)"},{"location":"images/docker-apprise-api/#environment-variables-e","text":"Env Function PUID=1000 for UserID - see below for explanation PGID=1000 for GroupID - see below for explanation TZ=Europe/London Specify a timezone to use EG Europe/London.","title":"Environment Variables (-e)"},{"location":"images/docker-apprise-api/#volume-mappings-v","text":"Volume Function /config Where config is stored.","title":"Volume Mappings (-v)"},{"location":"images/docker-apprise-api/#miscellaneous-options","text":"Parameter Function","title":"Miscellaneous Options"},{"location":"images/docker-apprise-api/#environment-variables-from-files-docker-secrets","text":"You can set any environment variable from a file by using a special prefix FILE__ . As an example: -e FILE__PASSWORD=/run/secrets/mysecretpassword will set the environment variable PASSWORD based on the contents of the /run/secrets/mysecretpassword file.","title":"Environment variables from files (Docker secrets)"},{"location":"images/docker-apprise-api/#umask-for-running-applications","text":"For all of our images we provide the ability to override the default umask settings for services started within the containers using the optional -e UMASK=022 setting. Keep in mind umask is not chmod; it subtracts from permissions based on its value, it does not add. Please read up here before asking for support.","title":"Umask for running applications"},{"location":"images/docker-apprise-api/#user-group-identifiers","text":"When using volumes ( -v flags), permissions issues can arise between the host OS and the container. We avoid this issue by allowing you to specify the user PUID and group PGID . 
Ensure any volume directories on the host are owned by the same user you specify and any permissions issues will vanish like magic. In this instance PUID=1000 and PGID=1000 , to find yours use id user as below: $ id username uid=1000(dockeruser) gid=1000(dockergroup) groups=1000(dockergroup)","title":"User / Group Identifiers"},{"location":"images/docker-apprise-api/#docker-mods","text":"We publish various Docker Mods to enable additional functionality within the containers. The list of Mods available for this image (if any) as well as universal mods that can be applied to any one of our images can be accessed via the dynamic badges above.","title":"Docker Mods"},{"location":"images/docker-apprise-api/#support-info","text":"Shell access whilst the container is running: docker exec -it apprise-api /bin/bash To monitor the logs of the container in realtime: docker logs -f apprise-api Container version number docker inspect -f '{{ index .Config.Labels \"build_version\" }}' apprise-api Image version number docker inspect -f '{{ index .Config.Labels \"build_version\" }}' lscr.io/linuxserver/apprise-api:latest","title":"Support Info"},{"location":"images/docker-apprise-api/#versions","text":"17.10.22: - Rebase to alpine 3.16, migrate to S6V3. 28.02.21: - Rebase to alpine 3.15. 03.11.21: - Increase uWSGI buffer size to 32kb. 16.05.21: - Add linuxserver wheel index. 26.02.21: - Initial Release.","title":"Versions"},{"location":"images/docker-audacity/","text":"linuxserver/audacity Audacity is an easy-to-use, multi-track audio editor and recorder. Developed by a group of volunteers as open source. Supported Architectures We utilise the docker manifest for multi-platform awareness. More information is available from docker here and our announcement here . Simply pulling lscr.io/linuxserver/audacity:latest should retrieve the correct image for your arch, but you can also pull specific arch images via tags. 
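The PUID/PGID lookup shown above with id username can also be scripted so the values drop straight into a run command. A small sketch (the variable names here are illustrative, not part of any image):

```shell
#!/bin/sh
# Capture the current user's numeric IDs for use as PUID/PGID.
PUID=$(id -u)
PGID=$(id -g)

# Emit ready-to-paste docker flags; actual values depend on your host user.
echo "-e PUID=$PUID -e PGID=$PGID"
```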
The architectures supported by this image are: Architecture Available Tag x86-64 \u2705 amd64-\\ arm64 \u274c armhf \u274c Application Setup The application can be accessed at: http://yourhost:3000/ By default the user/pass is abc/abc. If you change your password or want to log in manually to the GUI session for any reason use the following link: http://yourhost:3000/?login=true Usage To help you get started creating a container from this image you can either use docker-compose or the docker cli. docker-compose (recommended, click here for more info ) --- version: \"2.1\" services: audacity: image: lscr.io/linuxserver/audacity:latest container_name: audacity environment: - PUID=1000 - PGID=1000 - TZ=Europe/London volumes: - /path/to/config:/config ports: - 3000:3000 restart: unless-stopped docker cli ( click here for more info ) docker run -d \\ --name=audacity \\ -e PUID=1000 \\ -e PGID=1000 \\ -e TZ=Europe/London \\ -p 3000:3000 \\ -v /path/to/config:/config \\ --restart unless-stopped \\ lscr.io/linuxserver/audacity:latest Parameters Docker images are configured using parameters passed at runtime (such as those above). These parameters are separated by a colon and indicate external:internal respectively. For example, -p 8080:80 would expose port 80 from inside the container to be accessible from the host's IP on port 8080 outside the container. Ports ( -p ) Parameter Function 3000 Audacity desktop gui. Environment Variables ( -e ) Env Function PUID=1000 for UserID - see below for explanation PGID=1000 for GroupID - see below for explanation TZ=Europe/London Specify a timezone to use EG Europe/London. Volume Mappings ( -v ) Volume Function /config User's home directory in the container, stores program settings and images Miscellaneous Options Parameter Function Environment variables from files (Docker secrets) You can set any environment variable from a file by using a special prefix FILE__ . 
As an example: -e FILE__PASSWORD=/run/secrets/mysecretpassword will set the environment variable PASSWORD based on the contents of the /run/secrets/mysecretpassword file. Umask for running applications For all of our images we provide the ability to override the default umask settings for services started within the containers using the optional -e UMASK=022 setting. Keep in mind umask is not chmod; it subtracts from permissions based on its value, it does not add. Please read up here before asking for support. User / Group Identifiers When using volumes ( -v flags), permissions issues can arise between the host OS and the container. We avoid this issue by allowing you to specify the user PUID and group PGID . Ensure any volume directories on the host are owned by the same user you specify and any permissions issues will vanish like magic. In this instance PUID=1000 and PGID=1000 , to find yours use id user as below: $ id username uid=1000(dockeruser) gid=1000(dockergroup) groups=1000(dockergroup) Docker Mods We publish various Docker Mods to enable additional functionality within the containers. The list of Mods available for this image (if any) as well as universal mods that can be applied to any one of our images can be accessed via the dynamic badges above. Support Info Shell access whilst the container is running: docker exec -it audacity /bin/bash To monitor the logs of the container in realtime: docker logs -f audacity Container version number docker inspect -f '{{ index .Config.Labels \"build_version\" }}' audacity Image version number docker inspect -f '{{ index .Config.Labels \"build_version\" }}' lscr.io/linuxserver/audacity:latest Versions 14.09.21: - Use the official appimage, switch to single arch (x86_64). Armhf and aarch64 users can remain on version 3.0.2 but there won't be further updates. 
07.04.21: - Initial release.","title":"audacity"},{"location":"images/docker-audacity/#linuxserveraudacity","text":"Audacity is an easy-to-use, multi-track audio editor and recorder. Developed by a group of volunteers as open source.","title":"linuxserver/audacity"},{"location":"images/docker-audacity/#supported-architectures","text":"We utilise the docker manifest for multi-platform awareness. More information is available from docker here and our announcement here . Simply pulling lscr.io/linuxserver/audacity:latest should retrieve the correct image for your arch, but you can also pull specific arch images via tags. The architectures supported by this image are: Architecture Available Tag x86-64 \u2705 amd64-\\ arm64 \u274c armhf \u274c","title":"Supported Architectures"},{"location":"images/docker-audacity/#application-setup","text":"The application can be accessed at: http://yourhost:3000/ By default the user/pass is abc/abc. If you change your password or want to log in manually to the GUI session for any reason use the following link: http://yourhost:3000/?login=true","title":"Application Setup"},{"location":"images/docker-audacity/#usage","text":"To help you get started creating a container from this image you can either use docker-compose or the docker cli.","title":"Usage"},{"location":"images/docker-audacity/#docker-compose-recommended-click-here-for-more-info","text":"--- version: \"2.1\" services: audacity: image: lscr.io/linuxserver/audacity:latest container_name: audacity environment: - PUID=1000 - PGID=1000 - TZ=Europe/London volumes: - /path/to/config:/config ports: - 3000:3000 restart: unless-stopped","title":"docker-compose (recommended, click here for more info)"},{"location":"images/docker-audacity/#docker-cli-click-here-for-more-info","text":"docker run -d \\ --name=audacity \\ -e PUID=1000 \\ -e PGID=1000 \\ -e TZ=Europe/London \\ -p 3000:3000 \\ -v /path/to/config:/config \\ --restart unless-stopped \\ 
lscr.io/linuxserver/audacity:latest","title":"docker cli (click here for more info)"},{"location":"images/docker-audacity/#parameters","text":"Docker images are configured using parameters passed at runtime (such as those above). These parameters are separated by a colon and indicate external:internal respectively. For example, -p 8080:80 would expose port 80 from inside the container to be accessible from the host's IP on port 8080 outside the container.","title":"Parameters"},{"location":"images/docker-audacity/#ports-p","text":"Parameter Function 3000 Audacity desktop gui.","title":"Ports (-p)"},{"location":"images/docker-audacity/#environment-variables-e","text":"Env Function PUID=1000 for UserID - see below for explanation PGID=1000 for GroupID - see below for explanation TZ=Europe/London Specify a timezone to use EG Europe/London.","title":"Environment Variables (-e)"},{"location":"images/docker-audacity/#volume-mappings-v","text":"Volume Function /config User's home directory in the container, stores program settings and images","title":"Volume Mappings (-v)"},{"location":"images/docker-audacity/#miscellaneous-options","text":"Parameter Function","title":"Miscellaneous Options"},{"location":"images/docker-audacity/#environment-variables-from-files-docker-secrets","text":"You can set any environment variable from a file by using a special prefix FILE__ . As an example: -e FILE__PASSWORD=/run/secrets/mysecretpassword will set the environment variable PASSWORD based on the contents of the /run/secrets/mysecretpassword file.","title":"Environment variables from files (Docker secrets)"},{"location":"images/docker-audacity/#umask-for-running-applications","text":"For all of our images we provide the ability to override the default umask settings for services started within the containers using the optional -e UMASK=022 setting. Keep in mind umask is not chmod; it subtracts from permissions based on its value, it does not add. 
Please read up here before asking for support.","title":"Umask for running applications"},{"location":"images/docker-audacity/#user-group-identifiers","text":"When using volumes ( -v flags), permissions issues can arise between the host OS and the container. We avoid this issue by allowing you to specify the user PUID and group PGID . Ensure any volume directories on the host are owned by the same user you specify and any permissions issues will vanish like magic. In this instance PUID=1000 and PGID=1000 ; to find yours use id user as below: $ id username uid=1000(dockeruser) gid=1000(dockergroup) groups=1000(dockergroup)","title":"User / Group Identifiers"},{"location":"images/docker-audacity/#docker-mods","text":"We publish various Docker Mods to enable additional functionality within the containers. The list of Mods available for this image (if any) as well as universal mods that can be applied to any one of our images can be accessed via the dynamic badges above.","title":"Docker Mods"},{"location":"images/docker-audacity/#support-info","text":"Shell access whilst the container is running: docker exec -it audacity /bin/bash To monitor the logs of the container in realtime: docker logs -f audacity Container version number docker inspect -f '{{ index .Config.Labels \"build_version\" }}' audacity Image version number docker inspect -f '{{ index .Config.Labels \"build_version\" }}' lscr.io/linuxserver/audacity:latest","title":"Support Info"},{"location":"images/docker-audacity/#versions","text":"14.09.21: - Use the official appimage, switch to single arch (x86_64). Armhf and aarch64 users can remain on version 3.0.2 but there won't be further updates. 07.04.21: - Initial release.","title":"Versions"},{"location":"images/docker-babybuddy/","text":"linuxserver/babybuddy Babybuddy is a buddy for babies! Helps caregivers track sleep, feedings, diaper changes, tummy time and more to learn about and predict baby's needs without (as much) guess work. 
Supported Architectures We utilise the docker manifest for multi-platform awareness. More information is available from docker here and our announcement here . Simply pulling lscr.io/linuxserver/babybuddy:latest should retrieve the correct image for your arch, but you can also pull specific arch images via tags. The architectures supported by this image are: Architecture Available Tag x86-64 \u2705 amd64-\\ arm64 \u2705 arm64v8-\\ armhf \u2705 arm32v7-\\ Application Setup Access the webui at :8000 (or whichever host port is mapped in docker arguments). The default user/pass are admin:admin . By default BabyBuddy uses sqlite3. To use an external database like postgresql or mysql/mariadb instead, you can use the environment variables listed in BabyBuddy docs . Usage To help you get started creating a container from this image you can either use docker-compose or the docker cli. docker-compose (recommended, click here for more info ) --- version: \"2.1\" services: babybuddy: image: lscr.io/linuxserver/babybuddy:latest container_name: babybuddy environment: - PUID=1000 - PGID=1000 - TZ=Europe/London - CSRF_TRUSTED_ORIGINS=http://127.0.0.1:8000,https://babybuddy.domain.com volumes: - /path/to/appdata:/config ports: - 8000:8000 restart: unless-stopped docker cli ( click here for more info ) docker run -d \\ --name=babybuddy \\ -e PUID=1000 \\ -e PGID=1000 \\ -e TZ=Europe/London \\ -e CSRF_TRUSTED_ORIGINS=http://127.0.0.1:8000,https://babybuddy.domain.com \\ -p 8000:8000 \\ -v /path/to/appdata:/config \\ --restart unless-stopped \\ lscr.io/linuxserver/babybuddy:latest Parameters Docker images are configured using parameters passed at runtime (such as those above). These parameters are separated by a colon and indicate external:internal respectively. For example, -p 8080:80 would expose port 80 from inside the container to be accessible from the host's IP on port 8080 outside the container. 
Ports ( -p ) Parameter Function 8000 the port for the web ui Environment Variables ( -e ) Env Function PUID=1000 for UserID - see below for explanation PGID=1000 for GroupID - see below for explanation TZ=Europe/London Specify a timezone to use EG Europe/London CSRF_TRUSTED_ORIGINS=http://127.0.0.1:8000,https://babybuddy.domain.com Add any address you'd like to access babybuddy at (comma separated, no spaces) Volume Mappings ( -v ) Volume Function /config Contains all relevant configuration and data. Miscellaneous Options Parameter Function Environment variables from files (Docker secrets) You can set any environment variable from a file by using a special prepend FILE__ . As an example: -e FILE__PASSWORD=/run/secrets/mysecretpassword Will set the environment variable PASSWORD based on the contents of the /run/secrets/mysecretpassword file. Umask for running applications For all of our images we provide the ability to override the default umask settings for services started within the containers using the optional -e UMASK=022 setting. Keep in mind umask is not chmod; it subtracts from permissions based on its value, it does not add. Please read up here before asking for support. User / Group Identifiers When using volumes ( -v flags), permissions issues can arise between the host OS and the container. We avoid this issue by allowing you to specify the user PUID and group PGID . Ensure any volume directories on the host are owned by the same user you specify and any permissions issues will vanish like magic. In this instance PUID=1000 and PGID=1000 ; to find yours use id user as below: $ id username uid=1000(dockeruser) gid=1000(dockergroup) groups=1000(dockergroup) Docker Mods We publish various Docker Mods to enable additional functionality within the containers. The list of Mods available for this image (if any) as well as universal mods that can be applied to any one of our images can be accessed via the dynamic badges above. 
Support Info Shell access whilst the container is running: docker exec -it babybuddy /bin/bash To monitor the logs of the container in realtime: docker logs -f babybuddy Container version number docker inspect -f '{{ index .Config.Labels \"build_version\" }}' babybuddy Image version number docker inspect -f '{{ index .Config.Labels \"build_version\" }}' lscr.io/linuxserver/babybuddy:latest Versions 23.11.22: - Rebase to Alpine 3.16, migrate to s6v3. Restructure nginx configs ( see changes announcement ). 28.05.22: - Add missing PUID/PGID vars to readme. 03.04.22: - Rebase to alpine-nginx baseimage. Add CSRF_TRUSTED_ORIGINS env var. 11.12.21: - Add py3-mysqlclient for mysql/mariadb. 14.11.21: - Add lxml dependencies (temp fix for amd64 by force compiling lxml). 25.07.21: - Add libpq for postgresql. 08.07.21: - Fix pip install issue. 05.07.21: - Update Gunicorn parameters to prevent WORKER_TIMEOUT issue. 22.06.21: - Initial release.","title":"babybuddy"},{"location":"images/docker-babybuddy/#linuxserverbabybuddy","text":"Babybuddy is a buddy for babies! Helps caregivers track sleep, feedings, diaper changes, tummy time and more to learn about and predict baby's needs without (as much) guess work.","title":"linuxserver/babybuddy"},{"location":"images/docker-babybuddy/#supported-architectures","text":"We utilise the docker manifest for multi-platform awareness. More information is available from docker here and our announcement here . Simply pulling lscr.io/linuxserver/babybuddy:latest should retrieve the correct image for your arch, but you can also pull specific arch images via tags. The architectures supported by this image are: Architecture Available Tag x86-64 \u2705 amd64-\\ arm64 \u2705 arm64v8-\\ armhf \u2705 arm32v7-\\","title":"Supported Architectures"},{"location":"images/docker-babybuddy/#application-setup","text":"Access the webui at :8000 (or whichever host port is mapped in docker arguments). The default user/pass are admin:admin . 
By default BabyBuddy uses sqlite3. To use an external database like postgresql or mysql/mariadb instead, you can use the environment variables listed in BabyBuddy docs .","title":"Application Setup"},{"location":"images/docker-babybuddy/#usage","text":"To help you get started creating a container from this image you can either use docker-compose or the docker cli.","title":"Usage"},{"location":"images/docker-babybuddy/#docker-compose-recommended-click-here-for-more-info","text":"--- version: \"2.1\" services: babybuddy: image: lscr.io/linuxserver/babybuddy:latest container_name: babybuddy environment: - PUID=1000 - PGID=1000 - TZ=Europe/London - CSRF_TRUSTED_ORIGINS=http://127.0.0.1:8000,https://babybuddy.domain.com volumes: - /path/to/appdata:/config ports: - 8000:8000 restart: unless-stopped","title":"docker-compose (recommended, click here for more info)"},{"location":"images/docker-babybuddy/#docker-cli-click-here-for-more-info","text":"docker run -d \\ --name=babybuddy \\ -e PUID=1000 \\ -e PGID=1000 \\ -e TZ=Europe/London \\ -e CSRF_TRUSTED_ORIGINS=http://127.0.0.1:8000,https://babybuddy.domain.com \\ -p 8000:8000 \\ -v /path/to/appdata:/config \\ --restart unless-stopped \\ lscr.io/linuxserver/babybuddy:latest","title":"docker cli (click here for more info)"},{"location":"images/docker-babybuddy/#parameters","text":"Docker images are configured using parameters passed at runtime (such as those above). These parameters are separated by a colon and indicate external:internal respectively. 
For example, -p 8080:80 would expose port 80 from inside the container to be accessible from the host's IP on port 8080 outside the container.","title":"Parameters"},{"location":"images/docker-babybuddy/#ports-p","text":"Parameter Function 8000 the port for the web ui","title":"Ports (-p)"},{"location":"images/docker-babybuddy/#environment-variables-e","text":"Env Function PUID=1000 for UserID - see below for explanation PGID=1000 for GroupID - see below for explanation TZ=Europe/London Specify a timezone to use EG Europe/London CSRF_TRUSTED_ORIGINS=http://127.0.0.1:8000,https://babybuddy.domain.com Add any address you'd like to access babybuddy at (comma separated, no spaces)","title":"Environment Variables (-e)"},{"location":"images/docker-babybuddy/#volume-mappings-v","text":"Volume Function /config Contains all relevant configuration and data.","title":"Volume Mappings (-v)"},{"location":"images/docker-babybuddy/#miscellaneous-options","text":"Parameter Function","title":"Miscellaneous Options"},{"location":"images/docker-babybuddy/#environment-variables-from-files-docker-secrets","text":"You can set any environment variable from a file by using a special prepend FILE__ . As an example: -e FILE__PASSWORD=/run/secrets/mysecretpassword Will set the environment variable PASSWORD based on the contents of the /run/secrets/mysecretpassword file.","title":"Environment variables from files (Docker secrets)"},{"location":"images/docker-babybuddy/#umask-for-running-applications","text":"For all of our images we provide the ability to override the default umask settings for services started within the containers using the optional -e UMASK=022 setting. Keep in mind umask is not chmod; it subtracts from permissions based on its value, it does not add. 
Please read up here before asking for support.","title":"Umask for running applications"},{"location":"images/docker-babybuddy/#user-group-identifiers","text":"When using volumes ( -v flags), permissions issues can arise between the host OS and the container. We avoid this issue by allowing you to specify the user PUID and group PGID . Ensure any volume directories on the host are owned by the same user you specify and any permissions issues will vanish like magic. In this instance PUID=1000 and PGID=1000 ; to find yours use id user as below: $ id username uid=1000(dockeruser) gid=1000(dockergroup) groups=1000(dockergroup)","title":"User / Group Identifiers"},{"location":"images/docker-babybuddy/#docker-mods","text":"We publish various Docker Mods to enable additional functionality within the containers. The list of Mods available for this image (if any) as well as universal mods that can be applied to any one of our images can be accessed via the dynamic badges above.","title":"Docker Mods"},{"location":"images/docker-babybuddy/#support-info","text":"Shell access whilst the container is running: docker exec -it babybuddy /bin/bash To monitor the logs of the container in realtime: docker logs -f babybuddy Container version number docker inspect -f '{{ index .Config.Labels \"build_version\" }}' babybuddy Image version number docker inspect -f '{{ index .Config.Labels \"build_version\" }}' lscr.io/linuxserver/babybuddy:latest","title":"Support Info"},{"location":"images/docker-babybuddy/#versions","text":"23.11.22: - Rebase to Alpine 3.16, migrate to s6v3. Restructure nginx configs ( see changes announcement ). 28.05.22: - Add missing PUID/PGID vars to readme. 03.04.22: - Rebase to alpine-nginx baseimage. Add CSRF_TRUSTED_ORIGINS env var. 11.12.21: - Add py3-mysqlclient for mysql/mariadb. 14.11.21: - Add lxml dependencies (temp fix for amd64 by force compiling lxml). 25.07.21: - Add libpq for postgresql. 08.07.21: - Fix pip install issue. 
05.07.21: - Update Gunicorn parameters to prevent WORKER_TIMEOUT issue. 22.06.21: - Initial release.","title":"Versions"},{"location":"images/docker-base-alpine-example/","text":"Contact information:- Type Address/Details Discord Discord IRC freenode at #linuxserver.io more information at:- IRC Forum LinuxServer.io forum A custom base image built with Alpine linux and S6 overlay .. The following line is only in this repo for loop testing: { date: \"01.01.50:\", desc: \"I am the release message for this internal repo.\" }","title":"Docker base alpine example"},{"location":"images/docker-base-alpine-example/#contact-information-","text":"Type Address/Details Discord Discord IRC freenode at #linuxserver.io more information at:- IRC Forum LinuxServer.io forum A custom base image built with Alpine linux and S6 overlay .. The following line is only in this repo for loop testing: { date: \"01.01.50:\", desc: \"I am the release message for this internal repo.\" }","title":"Contact information:-"},{"location":"images/docker-base-ubuntu-example/","text":"Contact information:- Type Address/Details Discord Discord IRC freenode at #linuxserver.io more information at:- IRC Forum LinuxServer.io forum A custom base image built with Ubuntu cloud image and S6 overlay .. The following line is only in this repo for loop testing: { date: \"01.01.50:\", desc: \"I am the release message for this internal repo.\" }","title":"Docker base ubuntu example"},{"location":"images/docker-base-ubuntu-example/#contact-information-","text":"Type Address/Details Discord Discord IRC freenode at #linuxserver.io more information at:- IRC Forum LinuxServer.io forum A custom base image built with Ubuntu cloud image and S6 overlay .. 
The following line is only in this repo for loop testing: { date: \"01.01.50:\", desc: \"I am the release message for this internal repo.\" }","title":"Contact information:-"},{"location":"images/docker-baseimage-alpine-nginx/","text":"Contact information:- Type Address/Details Discord Discord IRC freenode at #linuxserver.io more information at:- IRC Forum LinuxServer.io forum A custom base image built with Alpine linux , nginx and S6 overlay .. Featuring :- weekly updates security updates The following line is only in this repo for loop testing: - { date: \"01.01.50:\", desc: \"I am the release message for this internal repo.\" }","title":"baseimage-alpine-nginx"},{"location":"images/docker-baseimage-alpine-nginx/#contact-information-","text":"Type Address/Details Discord Discord IRC freenode at #linuxserver.io more information at:- IRC Forum LinuxServer.io forum A custom base image built with Alpine linux , nginx and S6 overlay .. Featuring :- weekly updates security updates The following line is only in this repo for loop testing: - { date: \"01.01.50:\", desc: \"I am the release message for this internal repo.\" }","title":"Contact information:-"},{"location":"images/docker-baseimage-alpine-python/","text":"DEPRECATION NOTICE This image is deprecated. We will not offer support for this image and it will not be updated. We recommend our standard alpine baseimage instead: https://github.com/linuxserver/docker-baseimage-alpine Contact information:- Type Address/Details Discord Discord IRC freenode at #linuxserver.io more information at:- IRC Forum LinuxServer.io forum A custom base image built with Alpine linux , python2 and S6 overlay .. Featuring :- weekly updates security updates The following line is only in this repo for loop testing: - { date: \"01.01.50:\", desc: \"I am the release message for this internal repo.\" }","title":"baseimage-alpine-python"},{"location":"images/docker-baseimage-alpine-python/#deprecation-notice","text":"This image is deprecated. 
We will not offer support for this image and it will not be updated. We recommend our standard alpine baseimage instead: https://github.com/linuxserver/docker-baseimage-alpine","title":"DEPRECATION NOTICE"},{"location":"images/docker-baseimage-alpine-python/#contact-information-","text":"Type Address/Details Discord Discord IRC freenode at #linuxserver.io more information at:- IRC Forum LinuxServer.io forum A custom base image built with Alpine linux , python2 and S6 overlay .. Featuring :- weekly updates security updates The following line is only in this repo for loop testing: - { date: \"01.01.50:\", desc: \"I am the release message for this internal repo.\" }","title":"Contact information:-"},{"location":"images/docker-baseimage-alpine/","text":"Contact information:- Type Address/Details Discord Discord IRC libera at #linuxserver.io more information at:- IRC Forum LinuxServer.io forum A custom base image built with Alpine linux and S6 overlay .. The following line is only in this repo for loop testing: { date: \"01.01.50:\", desc: \"I am the release message for this internal repo.\" }","title":"baseimage-alpine"},{"location":"images/docker-baseimage-alpine/#contact-information-","text":"Type Address/Details Discord Discord IRC libera at #linuxserver.io more information at:- IRC Forum LinuxServer.io forum A custom base image built with Alpine linux and S6 overlay .. The following line is only in this repo for loop testing: { date: \"01.01.50:\", desc: \"I am the release message for this internal repo.\" }","title":"Contact information:-"},{"location":"images/docker-baseimage-arch/","text":"Contact information:- Type Address/Details Discord Discord IRC freenode at #linuxserver.io more information at:- IRC Forum LinuxServer.io forum A custom base image built with Alpine linux and S6 overlay .. 
The following line is only in this repo for loop testing: { date: \"01.01.50:\", desc: \"I am the release message for this internal repo.\" }","title":"baseimage-arch"},{"location":"images/docker-baseimage-arch/#contact-information-","text":"Type Address/Details Discord Discord IRC freenode at #linuxserver.io more information at:- IRC Forum LinuxServer.io forum A custom base image built with Alpine linux and S6 overlay .. The following line is only in this repo for loop testing: { date: \"01.01.50:\", desc: \"I am the release message for this internal repo.\" }","title":"Contact information:-"},{"location":"images/docker-baseimage-cloud9/","text":"DEPRECATION NOTICE This image is deprecated. We will not offer support for this image and it will not be updated. Contact information:- Type Address/Details Discord Discord IRC freenode at #linuxserver.io more information at:- IRC Forum LinuxServer.io forum A custom base image built with Ubuntu linux and Cloud9 .. The following line is only in this repo for loop testing: { date: \"01.01.50:\", desc: \"I am the release message for this internal repo.\" }","title":"baseimage-cloud9"},{"location":"images/docker-baseimage-cloud9/#deprecation-notice","text":"This image is deprecated. We will not offer support for this image and it will not be updated.","title":"DEPRECATION NOTICE"},{"location":"images/docker-baseimage-cloud9/#contact-information-","text":"Type Address/Details Discord Discord IRC freenode at #linuxserver.io more information at:- IRC Forum LinuxServer.io forum A custom base image built with Ubuntu linux and Cloud9 .. 
The following line is only in this repo for loop testing: { date: \"01.01.50:\", desc: \"I am the release message for this internal repo.\" }","title":"Contact information:-"},{"location":"images/docker-baseimage-fedora/","text":"Contact information:- Type Address/Details Discord Discord IRC freenode at #linuxserver.io more information at:- IRC Forum LinuxServer.io forum A custom base image built with Alpine linux and S6 overlay .. The following line is only in this repo for loop testing: { date: \"01.01.50:\", desc: \"I am the release message for this internal repo.\" }","title":"baseimage-fedora"},{"location":"images/docker-baseimage-fedora/#contact-information-","text":"Type Address/Details Discord Discord IRC freenode at #linuxserver.io more information at:- IRC Forum LinuxServer.io forum A custom base image built with Alpine linux and S6 overlay .. The following line is only in this repo for loop testing: { date: \"01.01.50:\", desc: \"I am the release message for this internal repo.\" }","title":"Contact information:-"},{"location":"images/docker-baseimage-guacgui/","text":"DEPRECATION NOTICE This image is deprecated. We will not offer support for this image and it will not be updated. Contact information: Type Address/Details Discord Discord IRC freenode at #linuxserver.io more information at:- IRC Forum LinuxServer.io forum linuxserver/docker-baseimage-guacgui A custom graphical base image built with: * Ubuntu cloud image * S6 overlay * xrdp * xorgxrdp * openbox * guacamole Supported Architectures Our images support multiple architectures such as x86-64 , arm64 and armhf . We utilise the docker manifest for multi-platform awareness. More information is available from docker here and our announcement here . Simply pulling linuxserver/docker-baseimage-guacgui should retrieve the correct image for your arch, but you can also pull specific arch images via tags. 
The architectures supported by this image are: Architecture Tag x86-64 amd64-latest arm64 arm64v8-latest armhf arm32v7-latest Usage Here are some example snippets to help you get started creating a container. docker docker create \\ --name=docker-baseimage-guacgui \\ -e PUID=1000 \\ -e PGID=1000 \\ -e TZ=Europe/London \\ -e APPNAME=xclock \\ -e GUAC_USER=abc `#optional` \\ -e GUAC_PASS=900150983cd24fb0d6963f7d28e17f72 `#optional` \\ -e GUAC_KEYBOARD_LAYOUT=de-de-qwertz `#optional` \\ -p 8080:8080 \\ -p 3389:3389 \\ -v :/config \\ --restart unless-stopped \\ linuxserver/docker-baseimage-guacgui docker-compose Compatible with docker-compose v2 schemas. --- version: \"2\" services: docker-baseimage-guacgui: image: linuxserver/docker-baseimage-guacgui container_name: docker-baseimage-guacgui environment: - PUID=1000 - PGID=1000 - TZ=Europe/London - APPNAME=xclock - GUAC_USER=abc #optional - GUAC_PASS=900150983cd24fb0d6963f7d28e17f72 #optional - GUAC_KEYBOARD_LAYOUT=de-de-qwertz #optional volumes: - :/config ports: - 8080:8080 - 3389:3389 restart: unless-stopped Parameters Container images are configured using parameters passed at runtime (such as those above). These parameters are separated by a colon and indicate external:internal respectively. For example, -p 8080:80 would expose port 80 from inside the container to be accessible from the host's IP on port 8080 outside the container. Parameter Function -p 8080 Allows HTTP access to the internal X server. -p 3389 Allows RDP access to the internal X server. -e PUID=1000 for UserID - see below for explanation -e PGID=1000 for GroupID - see below for explanation -e TZ=Europe/London Specify a timezone to use EG Europe/London -e APPNAME=xclock Specify the graphical application name shown on RDP access. -e GUAC_USER=abc Specify the username for guacamole's web interface. -e GUAC_PASS=900150983cd24fb0d6963f7d28e17f72 Specify the password's md5 hash for guacamole's web interface. 
-e GUAC_KEYBOARD_LAYOUT=de-de-qwertz Specify the keyboard layout used for the RDP session by the guacamole client. Possible values are \"en-us-qwerty\" (default), de-de-qwertz (German keyboard (qwertz)), fr-fr-azerty (French keyboard (azerty)), fr-ch-qwertz (Swiss French keyboard (qwertz)), it-it-qwerty (Italian keyboard), ja-jp-qwerty (Japanese keyboard) and sv-se-qwerty (Swedish keyboard). -v /config Contains X user's home directory contents. Application Setup This is a baseimage meant to be used as a base for graphical applications. Please refer to the example folder for usage. If GUAC_USER and GUAC_PASS are not set, there is no authentication. Passwords can be generated via the following: echo -n password | openssl md5 printf '%s' password | md5sum Please beware this image is not hardened for internet usage. Use a reverse ssl proxy to increase security. The following line is only in this repo for loop testing: - { date: \"01.01.50:\", desc: \"I am the release message for this internal repo.\" }","title":"baseimage-guacgui"},{"location":"images/docker-baseimage-guacgui/#deprecation-notice","text":"This image is deprecated. We will not offer support for this image and it will not be updated.","title":"DEPRECATION NOTICE"},{"location":"images/docker-baseimage-guacgui/#contact-information","text":"Type Address/Details Discord Discord IRC freenode at #linuxserver.io more information at:- IRC Forum LinuxServer.io forum","title":"Contact information:"},{"location":"images/docker-baseimage-guacgui/#linuxserverdocker-baseimage-guacgui","text":"A custom graphical base image built with: * Ubuntu cloud image * S6 overlay * xrdp * xorgxrdp * openbox * guacamole","title":"linuxserver/docker-baseimage-guacgui"},{"location":"images/docker-baseimage-guacgui/#supported-architectures","text":"Our images support multiple architectures such as x86-64 , arm64 and armhf . We utilise the docker manifest for multi-platform awareness. 
More information is available from docker here and our announcement here . Simply pulling linuxserver/docker-baseimage-guacgui should retrieve the correct image for your arch, but you can also pull specific arch images via tags. The architectures supported by this image are: Architecture Tag x86-64 amd64-latest arm64 arm64v8-latest armhf arm32v7-latest","title":"Supported Architectures"},{"location":"images/docker-baseimage-guacgui/#usage","text":"Here are some example snippets to help you get started creating a container.","title":"Usage"},{"location":"images/docker-baseimage-guacgui/#docker","text":"docker create \\ --name=docker-baseimage-guacgui \\ -e PUID=1000 \\ -e PGID=1000 \\ -e TZ=Europe/London \\ -e APPNAME=xclock \\ -e GUAC_USER=abc `#optional` \\ -e GUAC_PASS=900150983cd24fb0d6963f7d28e17f72 `#optional` \\ -e GUAC_KEYBOARD_LAYOUT=de-de-qwertz `#optional` \\ -p 8080:8080 \\ -p 3389:3389 \\ -v :/config \\ --restart unless-stopped \\ linuxserver/docker-baseimage-guacgui","title":"docker"},{"location":"images/docker-baseimage-guacgui/#docker-compose","text":"Compatible with docker-compose v2 schemas. --- version: \"2\" services: docker-baseimage-guacgui: image: linuxserver/docker-baseimage-guacgui container_name: docker-baseimage-guacgui environment: - PUID=1000 - PGID=1000 - TZ=Europe/London - APPNAME=xclock - GUAC_USER=abc #optional - GUAC_PASS=900150983cd24fb0d6963f7d28e17f72 #optional - GUAC_KEYBOARD_LAYOUT=de-de-qwertz #optional volumes: - :/config ports: - 8080:8080 - 3389:3389 restart: unless-stopped","title":"docker-compose"},{"location":"images/docker-baseimage-guacgui/#parameters","text":"Container images are configured using parameters passed at runtime (such as those above). These parameters are separated by a colon and indicate external:internal respectively. 
Parameter Function -p 8080 Allows HTTP access to the internal X server. -p 3389 Allows RDP access to the internal X server. -e PUID=1000 for UserID - see below for explanation -e PGID=1000 for GroupID - see below for explanation -e TZ=Europe/London Specify a timezone to use EG Europe/London -e APPNAME=xclock Specify the graphical application name shown on RDP access. -e GUAC_USER=abc Specify the username for guacamole's web interface. -e GUAC_PASS=900150983cd24fb0d6963f7d28e17f72 Specify the password's md5 hash for guacamole's web interface. -e GUAC_KEYBOARD_LAYOUT=de-de-qwertz Specify the keyboard layout used for the RDP session by the guacamole client. Possible values are \"en-us-qwerty\" (default), de-de-qwertz (German keyboard (qwertz)), fr-fr-azerty (French keyboard (azerty)), fr-ch-qwertz (Swiss French keyboard (qwertz)), it-it-qwerty (Italian keyboard), ja-jp-qwerty (Japanese keyboard) and sv-se-qwerty (Swedish keyboard). -v /config Contains X user's home directory contents.","title":"Parameters"},{"location":"images/docker-baseimage-guacgui/#application-setup","text":"This is a baseimage meant to be used as a base for graphical applications. Please refer to the example folder for usage. If GUAC_USER and GUAC_PASS are not set, there is no authentication. Passwords can be generated via the following: echo -n password | openssl md5 printf '%s' password | md5sum Please beware this image is not hardened for internet usage. Use a reverse ssl proxy to increase security. The following line is only in this repo for loop testing: - { date: \"01.01.50:\", desc: \"I am the release message for this internal repo.\" }","title":"Application Setup"},{"location":"images/docker-baseimage-gui/","text":"DEPRECATION NOTICE This image is deprecated. We will not offer support for this image and it will not be updated. 
Contact information: Type Address/Details Discord Discord IRC freenode at #linuxserver.io more information at:- IRC Forum LinuxServer.io forum linuxserver/docker-baseimage-gui A custom graphical base image built with: * Ubuntu cloud image * S6 overlay * xrdp * xorgxrdp * openbox Supported Architectures Our images support multiple architectures such as x86-64 , arm64 and armhf . We utilise the docker manifest for multi-platform awareness. More information is available from docker here and our announcement here . Simply pulling lsiobase/nginx should retrieve the correct image for your arch, but you can also pull specific arch images via tags. The architectures supported by this image are: Architecture Tag x86-64 amd64-latest arm64 arm64v8-latest armhf arm32v7-latest Usage Here is an example to help you get started creating a graphical container. Dockerfile #Firefox via RDP FROM lsiobase/ubuntu-gui:amd64-latest ######################################### ## ENVIRONMENTAL CONFIG ## ######################################### # Set correct environment variables ENV TERM=\"xterm\" APPNAME=\"firefox\" ARG DEBIAN_FRONTEND=noninteractive ######################################### ## INSTALL DEPENDENCIES ## ######################################### RUN apt-get update \\ && apt-get -y upgrade \\ && apt-get install -qy --no-install-recommends \\ firefox \\ && apt-get clean -y \\ && apt-get autoremove -y \\ && rm -rf /tmp/* /var/tmp/* \\ && rm -rf /var/lib/apt/lists/* COPY root / servicefile #!/bin/execlineb -P # ./root/etc/service.d/firefox/run # Redirect stderr to stdout. 
fdmove -c 2 1 # Wait until openbox is running if { s6-svwait -t 10000 -U /var/run/s6/services/openbox/ } # Drop privileges and set env s6-setuidgid abc s6-env DISPLAY=:1 HOME=/config # Execute Firefox /usr/bin/firefox Access the Graphical Interface Use an RDP client such as: * Remmina * Microsoft Remote Desktop Client The following line is only in this repo for loop testing: - { date: \"01.01.50:\", desc: \"I am the release message for this internal repo.\" }","title":"baseimage-gui"},{"location":"images/docker-baseimage-gui/#deprecation-notice","text":"This image is deprecated. We will not offer support for this image and it will not be updated.","title":"DEPRECATION NOTICE"},{"location":"images/docker-baseimage-gui/#contact-information","text":"Type Address/Details Discord Discord IRC freenode at #linuxserver.io more information at:- IRC Forum LinuxServer.io forum","title":"Contact information:"},{"location":"images/docker-baseimage-gui/#linuxserverdocker-baseimage-gui","text":"A custom graphical base image built with: * Ubuntu cloud image * S6 overlay * xrdp * xorgxrdp * openbox","title":"linuxserver/docker-baseimage-gui"},{"location":"images/docker-baseimage-gui/#supported-architectures","text":"Our images support multiple architectures such as x86-64 , arm64 and armhf . We utilise the docker manifest for multi-platform awareness. More information is available from docker here and our announcement here . Simply pulling lsiobase/nginx should retrieve the correct image for your arch, but you can also pull specific arch images via tags. 
The architectures supported by this image are: Architecture Tag x86-64 amd64-latest arm64 arm64v8-latest armhf arm32v7-latest","title":"Supported Architectures"},{"location":"images/docker-baseimage-gui/#usage","text":"Here is an example to help you get started creating a graphical container.","title":"Usage"},{"location":"images/docker-baseimage-gui/#dockerfile","text":"#Firefox via RDP FROM lsiobase/ubuntu-gui:amd64-latest ######################################### ## ENVIRONMENTAL CONFIG ## ######################################### # Set correct environment variables ENV TERM=\"xterm\" APPNAME=\"firefox\" ARG DEBIAN_FRONTEND=noninteractive ######################################### ## INSTALL DEPENDENCIES ## ######################################### RUN apt-get update \\ && apt-get -y upgrade \\ && apt-get install -qy --no-install-recommends \\ firefox \\ && apt-get clean -y \\ && apt-get autoremove -y \\ && rm -rf /tmp/* /var/tmp/* \\ && rm -rf /var/lib/apt/lists/* COPY root /","title":"Dockerfile"},{"location":"images/docker-baseimage-gui/#servicefile","text":"#!/bin/execlineb -P # ./root/etc/service.d/firefox/run # Redirect stderr to stdout. 
fdmove -c 2 1 # Wait until openbox is running if { s6-svwait -t 10000 -U /var/run/s6/services/openbox/ } # Drop privileges and set env s6-setuidgid abc s6-env DISPLAY=:1 HOME=/config # Execute Firefox /usr/bin/firefox","title":"servicefile"},{"location":"images/docker-baseimage-gui/#access-the-graphical-interface","text":"Use an RDP client such as: * Remmina * Microsoft Remote Deskotp Client The following line is only in this repo for loop testing: - { date: \"01.01.50:\", desc: \"I am the release message for this internal repo.\" }","title":"Access the Graphical Interface"},{"location":"images/docker-baseimage-mono/","text":"Contact information:- Type Address/Details Discord Discord IRC freenode at #linuxserver.io more information at:- IRC Forum LinuxServer.io forum A custom base image built with Ubuntu cloud image , mono and S6 overlay .. Featuring :- weekly updates security updates The following line is only in this repo for loop testing: - { date: \"01.01.50:\", desc: \"I am the release message for this internal repo.\" }","title":"baseimage-mono"},{"location":"images/docker-baseimage-mono/#contact-information-","text":"Type Address/Details Discord Discord IRC freenode at #linuxserver.io more information at:- IRC Forum LinuxServer.io forum A custom base image built with Ubuntu cloud image , mono and S6 overlay .. 
Featuring :- weekly updates security updates The following line is only in this repo for loop testing: - { date: \"01.01.50:\", desc: \"I am the release message for this internal repo.\" }","title":"Contact information:-"},{"location":"images/docker-baseimage-rdesktop-web/","text":"Contact information:- Type Address/Details Discord Discord IRC freenode at #linuxserver.io more information at:- IRC Forum LinuxServer.io forum A custom base image built with Ubuntu linux and xrdp The following line is only in this repo for loop testing: { date: \"01.01.50:\", desc: \"I am the release message for this internal repo.\" }","title":"baseimage-rdesktop-web"},{"location":"images/docker-baseimage-rdesktop-web/#contact-information-","text":"Type Address/Details Discord Discord IRC freenode at #linuxserver.io more information at:- IRC Forum LinuxServer.io forum A custom base image built with Ubuntu linux and xrdp The following line is only in this repo for loop testing: { date: \"01.01.50:\", desc: \"I am the release message for this internal repo.\" }","title":"Contact information:-"},{"location":"images/docker-baseimage-rdesktop/","text":"Contact information:- Type Address/Details Discord Discord IRC freenode at #linuxserver.io more information at:- IRC Forum LinuxServer.io forum A custom base image built with Ubuntu linux and xrdp The following line is only in this repo for loop testing: { date: \"01.01.50:\", desc: \"I am the release message for this internal repo.\" }","title":"baseimage-rdesktop"},{"location":"images/docker-baseimage-rdesktop/#contact-information-","text":"Type Address/Details Discord Discord IRC freenode at #linuxserver.io more information at:- IRC Forum LinuxServer.io forum A custom base image built with Ubuntu linux and xrdp The following line is only in this repo for loop testing: { date: \"01.01.50:\", desc: \"I am the release message for this internal repo.\" }","title":"Contact 
information:-"},{"location":"images/docker-bazarr/","text":"linuxserver/bazarr Bazarr is a companion application to Sonarr and Radarr. It can manage and download subtitles based on your requirements. You define your preferences by TV show or movie and Bazarr takes care of everything for you. Supported Architectures We utilise the docker manifest for multi-platform awareness. More information is available from docker here and our announcement here . Simply pulling lscr.io/linuxserver/bazarr:latest should retrieve the correct image for your arch, but you can also pull specific arch images via tags. The architectures supported by this image are: Architecture Available Tag x86-64 \u2705 amd64-\\ arm64 \u2705 arm64v8-\\ armhf \u2705 arm32v7-\\ Version Tags This image provides various versions that are available via tags. Please read the descriptions carefully and exercise caution when using unstable or development tags. Tag Available Description latest \u2705 Stable releases from Bazarr development \u2705 Pre-releases from Bazarr Application Setup Once running the URL will be http://:6767 . You must complete all the setup parameters in the webui before you can save the config. Usage To help you get started creating a container from this image you can either use docker-compose or the docker cli. 
### docker-compose (recommended, click here for more info)

```yaml
---
version: "2.1"
services:
  bazarr:
    image: lscr.io/linuxserver/bazarr:latest
    container_name: bazarr
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/London
    volumes:
      - /path/to/bazarr/config:/config
      - /path/to/movies:/movies #optional
      - /path/to/tv:/tv #optional
    ports:
      - 6767:6767
    restart: unless-stopped
```

### docker cli (click here for more info)

```shell
docker run -d \
  --name=bazarr \
  -e PUID=1000 \
  -e PGID=1000 \
  -e TZ=Europe/London \
  -p 6767:6767 \
  -v /path/to/bazarr/config:/config \
  -v /path/to/movies:/movies `#optional` \
  -v /path/to/tv:/tv `#optional` \
  --restart unless-stopped \
  lscr.io/linuxserver/bazarr:latest
```

## Parameters

Docker images are configured using parameters passed at runtime (such as those above). These parameters are separated by a colon and indicate `<external>:<internal>` respectively. For example, `-p 8080:80` would expose port 80 from inside the container to be accessible from the host's IP on port 8080 outside the container.

### Ports (-p)

| Parameter | Function |
| --- | --- |
| 6767 | Allows HTTP access to the internal webserver. |

### Environment Variables (-e)

| Env | Function |
| --- | --- |
| PUID=1000 | for UserID - see below for explanation |
| PGID=1000 | for GroupID - see below for explanation |
| TZ=Europe/London | Specify a timezone to use, e.g. Europe/London |

### Volume Mappings (-v)

| Volume | Function |
| --- | --- |
| /config | Bazarr data |
| /movies | Location of your movies |
| /tv | Location of your TV Shows |

### Miscellaneous Options

| Parameter | Function |
| --- | --- |

### Environment variables from files (Docker secrets)

You can set any environment variable from a file by using a special prepend FILE__.

As an example:

```
-e FILE__PASSWORD=/run/secrets/mysecretpassword
```

will set the environment variable PASSWORD based on the contents of the /run/secrets/mysecretpassword file.

### Umask for running applications

For all of our images we provide the ability to override the default umask settings for services started within the containers using the optional -e UMASK=022 setting.
Keep in mind umask is not chmod; it subtracts from permissions based on its value, it does not add. Please read up here before asking for support.

### User / Group Identifiers

When using volumes (-v flags), permissions issues can arise between the host OS and the container. We avoid this issue by allowing you to specify the user PUID and group PGID.

Ensure any volume directories on the host are owned by the same user you specify and any permissions issues will vanish like magic.

In this instance PUID=1000 and PGID=1000. To find yours, use `id user` as below:

```shell
$ id username
uid=1000(dockeruser) gid=1000(dockergroup) groups=1000(dockergroup)
```

### Docker Mods

We publish various Docker Mods to enable additional functionality within the containers. The list of Mods available for this image (if any) as well as universal mods that can be applied to any one of our images can be accessed via the dynamic badges above.

### Support Info

Shell access whilst the container is running:

```shell
docker exec -it bazarr /bin/bash
```

To monitor the logs of the container in realtime:

```shell
docker logs -f bazarr
```

Container version number:

```shell
docker inspect -f '{{ index .Config.Labels "build_version" }}' bazarr
```

Image version number:

```shell
docker inspect -f '{{ index .Config.Labels "build_version" }}' lscr.io/linuxserver/bazarr:latest
```

### Versions

* 11.10.22: - Rebase master branch to Alpine 3.16, migrate to s6v3.
* 15.15.21: - Temp fix for lxml, compile from scratch to avoid broken official wheel.
* 25.10.21: - Rebase to alpine 3.14. Fix numpy wheel.
* 22.10.21: - Added openblas package to prevent numpy error.
* 16.05.21: - Use wheel index.
* 19.04.21: - Install from release zip.
* 07.04.21: - Move app to /app/bazarr/bin, add package_info.
* 23.01.21: - Rebasing to alpine 3.13.
* 23.01.21: - Deprecate UMASK_SET in favor of UMASK in baseimage, see above for more information.
* 01.06.20: - Rebasing to alpine 3.12.
* 13.05.20: - Add donation links for Bazarr to Github sponsors button and container log.
* 08.04.20: - Removed /movies and /tv volumes from Dockerfiles.
* 28.12.19: - Upgrade to Python 3.
* 19.12.19: - Rebasing to alpine 3.11.
* 28.06.19: - Rebasing to alpine 3.10.
* 13.06.19: - Add env variable for setting umask.
* 12.06.19: - Swap to install deps using maintainers requirements.txt, add ffmpeg for ffprobe.
* 17.04.19: - Add default UTC timezone if user does not set it.
* 23.03.19: - Switching to new Base images, shift to arm32v7 tag.
* 22.02.19: - Rebasing to alpine 3.9.
* 11.09.18: - Initial release.

# linuxserver/beets

Beets is a music library manager and not, for the most part, a music player. It does include a simple player plugin and an experimental Web-based player, but it generally leaves actual sound-reproduction to specialized tools.

## Supported Architectures

We utilise the docker manifest for multi-platform awareness. More information is available from docker here and our announcement here.

Simply pulling lscr.io/linuxserver/beets:latest should retrieve the correct image for your arch, but you can also pull specific arch images via tags.

The architectures supported by this image are:

| Architecture | Available | Tag |
| --- | --- | --- |
| x86-64 | ✅ | amd64-\<version tag> |
| arm64 | ✅ | arm64v8-\<version tag> |
| armhf | ✅ | arm32v7-\<version tag> |

## Version Tags

This image provides various versions that are available via tags. Please read the descriptions carefully and exercise caution when using unstable or development tags.

| Tag | Available | Description |
| --- | --- | --- |
| latest | ✅ | Stable Beets Releases |
| nightly | ✅ | Built against head of Beets git, generally considered unstable but a likely choice for power users of the application. |

## Application Setup

Edit the config file in /config.

To edit the config from within the container use `beet config -e`.

For a command prompt as user abc:

```shell
docker exec -it -u abc beets bash
```

See Beets for more info.

Contains the beets-extrafiles plugin; see its configuration details.

## Usage

To help you get started creating a container from this image you can either use docker-compose or the docker cli.
### docker-compose (recommended, click here for more info)

```yaml
---
version: "2.1"
services:
  beets:
    image: lscr.io/linuxserver/beets:latest
    container_name: beets
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/London
    volumes:
      - <path to config>:/config
      - <path to music>:/music
      - <path to downloads>:/downloads
    ports:
      - 8337:8337
    restart: unless-stopped
```

### docker cli (click here for more info)

```shell
docker run -d \
  --name=beets \
  -e PUID=1000 \
  -e PGID=1000 \
  -e TZ=Europe/London \
  -p 8337:8337 \
  -v <path to config>:/config \
  -v <path to music>:/music \
  -v <path to downloads>:/downloads \
  --restart unless-stopped \
  lscr.io/linuxserver/beets:latest
```

## Parameters

Docker images are configured using parameters passed at runtime (such as those above). These parameters are separated by a colon and indicate `<external>:<internal>` respectively. For example, `-p 8080:80` would expose port 80 from inside the container to be accessible from the host's IP on port 8080 outside the container.

### Ports (-p)

| Parameter | Function |
| --- | --- |
| 8337 | Application WebUI |

### Environment Variables (-e)

| Env | Function |
| --- | --- |
| PUID=1000 | for UserID - see below for explanation |
| PGID=1000 | for GroupID - see below for explanation |
| TZ=Europe/London | Specify a timezone to use, e.g. Europe/London |

### Volume Mappings (-v)

| Volume | Function |
| --- | --- |
| /config | Configuration files. |
| /music | Music library |
| /downloads | Non processed music |

### Miscellaneous Options

| Parameter | Function |
| --- | --- |

### Environment variables from files (Docker secrets)

You can set any environment variable from a file by using a special prepend FILE__.

As an example:

```
-e FILE__PASSWORD=/run/secrets/mysecretpassword
```

will set the environment variable PASSWORD based on the contents of the /run/secrets/mysecretpassword file.

### Umask for running applications

For all of our images we provide the ability to override the default umask settings for services started within the containers using the optional -e UMASK=022 setting. Keep in mind umask is not chmod; it subtracts from permissions based on its value, it does not add. Please read up here before asking for support.
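The subtractive effect of a umask can be seen directly in any Linux shell; this is a generic illustration of the mechanism, not something specific to the container (the file path here is arbitrary):

```shell
# umask removes (masks) permission bits from newly created files; it never adds them.
# New files start from mode 666, so a umask of 022 yields 644 (rw-r--r--).
umask 022
rm -f /tmp/umask-demo
touch /tmp/umask-demo
stat -c '%a' /tmp/umask-demo    # prints 644
rm -f /tmp/umask-demo
```

The same arithmetic applies to the container's UMASK setting: 022 strips write permission from group and other for everything the services create.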
### User / Group Identifiers

When using volumes (-v flags), permissions issues can arise between the host OS and the container. We avoid this issue by allowing you to specify the user PUID and group PGID.

Ensure any volume directories on the host are owned by the same user you specify and any permissions issues will vanish like magic.

In this instance PUID=1000 and PGID=1000. To find yours, use `id user` as below:

```shell
$ id username
uid=1000(dockeruser) gid=1000(dockergroup) groups=1000(dockergroup)
```

### Docker Mods

We publish various Docker Mods to enable additional functionality within the containers. The list of Mods available for this image (if any) as well as universal mods that can be applied to any one of our images can be accessed via the dynamic badges above.

### Support Info

Shell access whilst the container is running:

```shell
docker exec -it beets /bin/bash
```

To monitor the logs of the container in realtime:

```shell
docker logs -f beets
```

Container version number:

```shell
docker inspect -f '{{ index .Config.Labels "build_version" }}' beets
```

Image version number:

```shell
docker inspect -f '{{ index .Config.Labels "build_version" }}' lscr.io/linuxserver/beets:latest
```

### Versions

* 15.01.22: - Rebasing to alpine 3.15.
* 19.12.19: - Rebasing to alpine 3.11.
* 28.06.19: - Rebasing to alpine 3.10.
* 12.05.19: - Add flac and mp3val binaries required for badfiles plugin.
* 12.04.19: - Rebase to Alpine 3.9.
* 23.03.19: - Switching to new Base images, shift to arm32v7 tag.
* 11.03.19: - Swap copyartifacts for extrafiles, update endpoints with nightly tag.
* 01.03.19: - Switch to python3.
* 07.02.19: - Add fftw-dev build dependency for chromaprint.
* 28.01.19: - Add pipeline logic and multi arch.
* 15.08.18: - Rebase to alpine 3.8, use alpine repo version of pylast.
* 12.08.18: - Add requests pip package.
* 04.03.18: - Upgrade mp3gain to 1.6.1.
* 02.01.18: - Deprecate cpu_core routine lack of scaling.
* 27.12.17: - Add beautifulsoup4 pip package.
* 06.12.17: - Rebase to alpine linux 3.7.
* 25.05.17: - Rebase to alpine linux 3.6.
* 06.02.17: - Rebase to alpine linux 3.5.
* 16.01.17: - Add packages required for replaygain.
* 24.12.16: - Add beets-copyartifacts plugin.
* 07.12.16: - Edit cmake options for chromaprint, should now build and install fpcalc; add gstreamer lib.
* 14.10.16: - Add version layer information.
* 01.10.16: - Add nano and editor variable to allow editing of the config from the container command line.
* 30.09.16: - Fix umask.
* 24.09.16: - Rebase to alpine linux.
* 10.09.16: - Add layer badges to README.
* 05.01.16: - Change ffmpeg repository; other version crashes container.
* 06.11.15: - Initial Release.
* 29.11.15: - Take out term setting, causing issues with key entry for some users.

# linuxserver/blender

Blender is a free and open-source 3D computer graphics software toolset used for creating animated films, visual effects, art, 3D printed models, motion graphics, interactive 3D applications, virtual reality, and computer games. This image does not support GPU rendering out of the box, only an accelerated workspace experience.

## Supported Architectures

We utilise the docker manifest for multi-platform awareness. More information is available from docker here and our announcement here.

Simply pulling lscr.io/linuxserver/blender:latest should retrieve the correct image for your arch, but you can also pull specific arch images via tags.

The architectures supported by this image are:

| Architecture | Available | Tag |
| --- | --- | --- |
| x86-64 | ✅ | amd64-\<version tag> |
| arm64 | ✅ | arm64v8-\<version tag> |
| armhf | ✅ | arm32v7-\<version tag> |

## Application Setup

The application can be accessed at:

* http://yourhost:3000/

By default the user/pass is abc/abc. If you change your password or want to login manually to the GUI session for any reason, use the following link:

* http://yourhost:3000/?login=true

You can also force login on the '/' path without this parameter by passing the environment variable -e AUTO_LOGIN=false.
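Whether mounting /dev/dri into the container (covered in the hardware acceleration notes that follow) will do anything useful depends on the host actually exposing a DRI device. A hypothetical pre-flight check, not part of the image itself:

```shell
# Check for a DRI render device before passing --device=/dev/dri:/dev/dri (Linux hosts only).
# With no device present, the desktop falls back to software (LLVMpipe) rendering.
if [ -d /dev/dri ]; then
    echo "DRI devices found:"
    ls /dev/dri
else
    echo "no /dev/dri on this host; skip --device=/dev/dri"
fi
```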
## Hardware Acceleration

This only applies to your desktop experience. This container is capable of supporting accelerated rendering with /dev/dri mounted in, but the AMD HIP and Nvidia CUDA runtimes are massive and are not installed by default in this container.

### Intel/ATI/AMD

To leverage hardware acceleration you will need to mount the /dev/dri video device inside of the container:

```
--device=/dev/dri:/dev/dri
```

We will automatically ensure the abc user inside of the container has the proper permissions to access this device.

### Nvidia

Hardware acceleration users for Nvidia will need to install the container runtime provided by Nvidia on their host; instructions can be found here: https://github.com/NVIDIA/nvidia-docker

We automatically add the necessary environment variable that will utilise all the features available on a GPU on the host. Once nvidia-docker is installed on your host you will need to re/create the docker container with the nvidia container runtime `--runtime=nvidia` and add an environment variable `-e NVIDIA_VISIBLE_DEVICES=all` (can also be set to a specific GPU's UUID; this can be discovered by running `nvidia-smi --query-gpu=gpu_name,gpu_uuid --format=csv`). NVIDIA automatically mounts the GPU and drivers from your host into the container.

### Arm Devices

Arm devices can run this image, but generally should not mount in /dev/dri, as the OpenGL ES version is not high enough to run Blender. The program can still run on these platforms, leveraging CPU LLVMpipe rendering. Due to the lack of arm32/64 binaries from the upstream project, our arm32/64 images install the latest version from the ubuntu repo, which is usually behind; thus the version the image is tagged with does not match the version contained.

## Keyboard Layouts

This should match the layout on the computer you are accessing the container from.
The keyboard layouts available for use are: * da-dk-qwerty- Danish keyboard * de-ch-qwertz- Swiss German keyboard (qwertz) * de-de-qwertz- German keyboard (qwertz) - OSK available * en-gb-qwerty- English (UK) keyboard * en-us-qwerty- English (US) keyboard - OSK available DEFAULT * es-es-qwerty- Spanish keyboard - OSK available * fr-ch-qwertz- Swiss French keyboard (qwertz) * fr-fr-azerty- French keyboard (azerty) - OSK available * it-it-qwerty- Italian keyboard - OSK available * ja-jp-qwerty- Japanese keyboard * pt-br-qwerty- Portuguese Brazilian keyboard * sv-se-qwerty- Swedish keyboard * tr-tr-qwerty- Turkish-Q keyboard Usage To help you get started creating a container from this image you can either use docker-compose or the docker cli. docker-compose (recommended, click here for more info ) --- version: \"2.1\" services: blender: image: lscr.io/linuxserver/blender:latest container_name: blender security_opt: - seccomp:unconfined #optional environment: - PUID=1000 - PGID=1000 - TZ=Europe/London - SUBFOLDER=/ #optional - KEYBOARD=en-us-qwerty #optional volumes: - /path/to/config:/config ports: - 3000:3000 devices: - /dev/dri:/dev/dri #optional restart: unless-stopped docker cli ( click here for more info ) docker run -d \\ --name=blender \\ --security-opt seccomp=unconfined `#optional` \\ -e PUID=1000 \\ -e PGID=1000 \\ -e TZ=Europe/London \\ -e SUBFOLDER=/ `#optional` \\ -e KEYBOARD=en-us-qwerty `#optional` \\ -p 3000:3000 \\ -v /path/to/config:/config \\ --device /dev/dri:/dev/dri `#optional` \\ --restart unless-stopped \\ lscr.io/linuxserver/blender:latest Parameters Docker images are configured using parameters passed at runtime (such as those above). These parameters are separated by a colon and indicate : respectively. For example, -p 8080:80 would expose port 80 from inside the container to be accessible from the host's IP on port 8080 outside the container. 
Ports ( -p ) Parameter Function 3000 Blender desktop gui Environment Variables ( -e ) Env Function PUID=1000 for UserID - see below for explanation PGID=1000 for GroupID - see below for explanation TZ=Europe/London Specify a timezone to use EG Europe/London SUBFOLDER=/ Specify a subfolder to use with reverse proxies, IE /subfolder/ KEYBOARD=en-us-qwerty See the keyboard layouts section for more information and options. Volume Mappings ( -v ) Volume Function /config User's home directory in the container, stores local files and settings Device Mappings ( --device ) Parameter Function /dev/dri Add this for hardware acceleration (Linux hosts only) Miscellaneous Options Parameter Function --security-opt seccomp=unconfined For Docker Engine only, this may be required depending on your Docker and storage configuration. Environment variables from files (Docker secrets) You can set any environment variable from a file by using a special FILE__ prefix. As an example: -e FILE__PASSWORD=/run/secrets/mysecretpassword Will set the environment variable PASSWORD based on the contents of the /run/secrets/mysecretpassword file. Umask for running applications For all of our images we provide the ability to override the default umask settings for services started within the containers using the optional -e UMASK=022 setting. Keep in mind umask is not chmod; it subtracts from permissions based on its value, it does not add. Please read up here before asking for support. User / Group Identifiers When using volumes ( -v flags), permissions issues can arise between the host OS and the container. We avoid this issue by allowing you to specify the user PUID and group PGID . Ensure any volume directories on the host are owned by the same user you specify and any permissions issues will vanish like magic. 
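The umask behaviour described above (subtracting, never adding, permission bits) can be sketched in shell; the file path is purely illustrative:

```shell
# UMASK masks permission bits out of the default creation mode;
# it never adds permissions. With umask 022, a newly created file
# (default mode 666) ends up as 666 & ~022 = 644:
umask 022
rm -f /tmp/umask-demo
touch /tmp/umask-demo
stat -c '%a' /tmp/umask-demo   # prints 644
rm -f /tmp/umask-demo
```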
In this instance PUID=1000 and PGID=1000 , to find yours use id user as below: $ id username uid=1000(dockeruser) gid=1000(dockergroup) groups=1000(dockergroup) Docker Mods We publish various Docker Mods to enable additional functionality within the containers. The list of Mods available for this image (if any) as well as universal mods that can be applied to any one of our images can be accessed via the dynamic badges above. Support Info Shell access whilst the container is running: docker exec -it blender /bin/bash To monitor the logs of the container in realtime: docker logs -f blender Container version number docker inspect -f '{{ index .Config.Labels \"build_version\" }}' blender Image version number docker inspect -f '{{ index .Config.Labels \"build_version\" }}' lscr.io/linuxserver/blender:latest Versions 06.05.22: - Use the full semver version in image tags. Arm32/64 version tags are inaccurate due to installing from ubuntu repo, which is usually behind. 12.03.22: - Initial Release.","title":"blender"},{"location":"images/docker-blender/#linuxserverblender","text":"Blender is a free and open-source 3D computer graphics software toolset used for creating animated films, visual effects, art, 3D printed models, motion graphics, interactive 3D applications, virtual reality, and computer games. This image does not support GPU rendering out of the box only accelerated workspace experience","title":"linuxserver/blender"},{"location":"images/docker-blender/#supported-architectures","text":"We utilise the docker manifest for multi-platform awareness. More information is available from docker here and our announcement here . Simply pulling lscr.io/linuxserver/blender:latest should retrieve the correct image for your arch, but you can also pull specific arch images via tags. 
The architectures supported by this image are: Architecture Available Tag x86-64 \u2705 amd64-\\ arm64 \u2705 arm64v8-\\ armhf \u2705 arm32v7-\\","title":"Supported Architectures"},{"location":"images/docker-blender/#application-setup","text":"The application can be accessed at: * http://yourhost:3000/ By default the user/pass is abc/abc. If you change your password or want to login manually to the GUI session for any reason, use the following link: http://yourhost:3000/?login=true You can also force login on the '/' path without this parameter by passing the environment variable -e AUTO_LOGIN=false .","title":"Application Setup"},{"location":"images/docker-blender/#hardware-acceleration","text":"This only applies to your desktop experience. This container is capable of supporting accelerated rendering with /dev/dri mounted in, but the AMD HIP and Nvidia CUDA runtimes, which are massive, are not installed by default in this container.","title":"Hardware Acceleration"},{"location":"images/docker-blender/#intelatiamd","text":"To leverage hardware acceleration you will need to mount the /dev/dri video device inside of the container. --device=/dev/dri:/dev/dri We will automatically ensure the abc user inside of the container has the proper permissions to access this device.","title":"Intel/ATI/AMD"},{"location":"images/docker-blender/#nvidia","text":"Hardware acceleration users for Nvidia will need to install the container runtime provided by Nvidia on their host; instructions can be found here: https://github.com/NVIDIA/nvidia-docker We automatically add the necessary environment variable that will utilise all the features available on a GPU on the host. 
Once nvidia-docker is installed on your host you will need to re/create the docker container with the nvidia container runtime --runtime=nvidia and add an environment variable -e NVIDIA_VISIBLE_DEVICES=all (can also be set to a specific gpu's UUID, this can be discovered by running nvidia-smi --query-gpu=gpu_name,gpu_uuid --format=csv ). NVIDIA automatically mounts the GPU and drivers from your host into the container.","title":"Nvidia"},{"location":"images/docker-blender/#arm-devices","text":"Arm devices can run this image, but generally should not mount in /dev/dri. The OpenGL ES version is not high enough to run Blender. The program can run on these platforms though, leveraging CPU LLVMPipe rendering. Due to lack of arm32/64 binaries from the upstream project, our arm32/64 images install the latest version from the ubuntu repo, which is usually behind and thus the version the image is tagged with does not match the version contained.","title":"Arm Devices"},{"location":"images/docker-blender/#keyboard-layouts","text":"This should match the layout on the computer you are accessing the container from. 
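The Nvidia steps above can be sketched as a single docker run invocation (the name, ports, and volume path are the ones used elsewhere on this page; treat it as a template, not the only valid form):

```shell
# Recreate the container with the Nvidia runtime and expose all GPUs.
# NVIDIA_VISIBLE_DEVICES can instead be set to one GPU's UUID, found via:
#   nvidia-smi --query-gpu=gpu_name,gpu_uuid --format=csv
docker run -d \
  --name=blender \
  --runtime=nvidia \
  -e NVIDIA_VISIBLE_DEVICES=all \
  -e PUID=1000 -e PGID=1000 \
  -p 3000:3000 \
  -v /path/to/config:/config \
  --restart unless-stopped \
  lscr.io/linuxserver/blender:latest
```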
The keyboard layouts available for use are: * da-dk-qwerty- Danish keyboard * de-ch-qwertz- Swiss German keyboard (qwertz) * de-de-qwertz- German keyboard (qwertz) - OSK available * en-gb-qwerty- English (UK) keyboard * en-us-qwerty- English (US) keyboard - OSK available DEFAULT * es-es-qwerty- Spanish keyboard - OSK available * fr-ch-qwertz- Swiss French keyboard (qwertz) * fr-fr-azerty- French keyboard (azerty) - OSK available * it-it-qwerty- Italian keyboard - OSK available * ja-jp-qwerty- Japanese keyboard * pt-br-qwerty- Portuguese Brazilian keyboard * sv-se-qwerty- Swedish keyboard * tr-tr-qwerty- Turkish-Q keyboard","title":"Keyboard Layouts"},{"location":"images/docker-blender/#usage","text":"To help you get started creating a container from this image you can either use docker-compose or the docker cli.","title":"Usage"},{"location":"images/docker-blender/#docker-compose-recommended-click-here-for-more-info","text":"--- version: \"2.1\" services: blender: image: lscr.io/linuxserver/blender:latest container_name: blender security_opt: - seccomp:unconfined #optional environment: - PUID=1000 - PGID=1000 - TZ=Europe/London - SUBFOLDER=/ #optional - KEYBOARD=en-us-qwerty #optional volumes: - /path/to/config:/config ports: - 3000:3000 devices: - /dev/dri:/dev/dri #optional restart: unless-stopped","title":"docker-compose (recommended, click here for more info)"},{"location":"images/docker-blender/#docker-cli-click-here-for-more-info","text":"docker run -d \\ --name=blender \\ --security-opt seccomp=unconfined `#optional` \\ -e PUID=1000 \\ -e PGID=1000 \\ -e TZ=Europe/London \\ -e SUBFOLDER=/ `#optional` \\ -e KEYBOARD=en-us-qwerty `#optional` \\ -p 3000:3000 \\ -v /path/to/config:/config \\ --device /dev/dri:/dev/dri `#optional` \\ --restart unless-stopped \\ lscr.io/linuxserver/blender:latest","title":"docker cli (click here for more info)"},{"location":"images/docker-blender/#parameters","text":"Docker images are configured using parameters passed at runtime 
(such as those above). These parameters are separated by a colon and indicate : respectively. For example, -p 8080:80 would expose port 80 from inside the container to be accessible from the host's IP on port 8080 outside the container.","title":"Parameters"},{"location":"images/docker-blender/#ports-p","text":"Parameter Function 3000 Blender desktop gui","title":"Ports (-p)"},{"location":"images/docker-blender/#environment-variables-e","text":"Env Function PUID=1000 for UserID - see below for explanation PGID=1000 for GroupID - see below for explanation TZ=Europe/London Specify a timezone to use EG Europe/London SUBFOLDER=/ Specify a subfolder to use with reverse proxies, IE /subfolder/ KEYBOARD=en-us-qwerty See the keyboard layouts section for more information and options.","title":"Environment Variables (-e)"},{"location":"images/docker-blender/#volume-mappings-v","text":"Volume Function /config Users home directory in the container, stores local files and settings","title":"Volume Mappings (-v)"},{"location":"images/docker-blender/#device-mappings-device","text":"Parameter Function /dev/dri Add this for hardware acceleration (Linux hosts only)","title":"Device Mappings (--device)"},{"location":"images/docker-blender/#miscellaneous-options","text":"Parameter Function --security-opt seccomp=unconfined For Docker Engine only, this may be required depending on your Docker and storage configuration.","title":"Miscellaneous Options"},{"location":"images/docker-blender/#environment-variables-from-files-docker-secrets","text":"You can set any environment variable from a file by using a special prepend FILE__ . 
As an example: -e FILE__PASSWORD=/run/secrets/mysecretpassword Will set the environment variable PASSWORD based on the contents of the /run/secrets/mysecretpassword file.","title":"Environment variables from files (Docker secrets)"},{"location":"images/docker-blender/#umask-for-running-applications","text":"For all of our images we provide the ability to override the default umask settings for services started within the containers using the optional -e UMASK=022 setting. Keep in mind umask is not chmod it subtracts from permissions based on it's value it does not add. Please read up here before asking for support.","title":"Umask for running applications"},{"location":"images/docker-blender/#user-group-identifiers","text":"When using volumes ( -v flags), permissions issues can arise between the host OS and the container, we avoid this issue by allowing you to specify the user PUID and group PGID . Ensure any volume directories on the host are owned by the same user you specify and any permissions issues will vanish like magic. In this instance PUID=1000 and PGID=1000 , to find yours use id user as below: $ id username uid=1000(dockeruser) gid=1000(dockergroup) groups=1000(dockergroup)","title":"User / Group Identifiers"},{"location":"images/docker-blender/#docker-mods","text":"We publish various Docker Mods to enable additional functionality within the containers. 
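The FILE__ behaviour described above can be sketched in bash; this is an illustrative re-implementation (variable names and secret value are made up), not the image's actual init script:

```shell
#!/bin/bash
# For each exported FILE__NAME variable holding a file path,
# export NAME with that file's contents.
secret=$(mktemp)
printf '%s' 'supersecret' > "$secret"
export FILE__PASSWORD="$secret"

for var in $(compgen -e FILE__); do          # exported vars starting with FILE__
  export "${var#FILE__}"="$(cat "${!var}")"  # strip prefix, read file contents
done

echo "$PASSWORD"   # prints supersecret
rm -f "$secret"
```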
The list of Mods available for this image (if any) as well as universal mods that can be applied to any one of our images can be accessed via the dynamic badges above.","title":"Docker Mods"},{"location":"images/docker-blender/#support-info","text":"Shell access whilst the container is running: docker exec -it blender /bin/bash To monitor the logs of the container in realtime: docker logs -f blender Container version number docker inspect -f '{{ index .Config.Labels \"build_version\" }}' blender Image version number docker inspect -f '{{ index .Config.Labels \"build_version\" }}' lscr.io/linuxserver/blender:latest","title":"Support Info"},{"location":"images/docker-blender/#versions","text":"06.05.22: - Use the full semver version in image tags. Arm32/64 version tags are inaccurate due to installing from ubuntu repo, which is usually behind. 12.03.22: - Initial Release.","title":"Versions"},{"location":"images/docker-boinc/","text":"linuxserver/boinc BOINC is a platform for high-throughput computing on a large scale (thousands or millions of computers). It can be used for volunteer computing (using consumer devices) or grid computing (using organizational resources). It supports virtualized, parallel, and GPU-based applications. Supported Architectures We utilise the docker manifest for multi-platform awareness. More information is available from docker here and our announcement here . Simply pulling lscr.io/linuxserver/boinc:latest should retrieve the correct image for your arch, but you can also pull specific arch images via tags. The architectures supported by this image are: Architecture Available Tag x86-64 \u2705 amd64-\\ arm64 \u2705 arm64v8-\\ armhf \u2705 arm32v7-\\ Application Setup This image sets up the BOINC client and manager and makes its interface available via Guacamole server in the browser. The interface is available at http://your-ip:8080 . By default, there is no password set for the main gui. 
Optional environment variable PASSWORD will allow setting a password for the user abc . You can access advanced features of the Guacamole remote desktop using ctrl + alt + shift enabling you to use remote copy/paste and different languages. It is recommended to switch to Advanced View in the top menu, because the Computing Preferences don't seem to be displayed in Simple View . Sometimes, the pop-up windows may open in a tiny box in the upper left corner of the screen. When that happens, you can find the corner and resize them. GPU Hardware Acceleration Intel Hardware acceleration users for Intel Quicksync will need to mount their /dev/dri video device inside of the container by passing the following command when running or creating the container: --device=/dev/dri:/dev/dri We will automatically ensure the abc user inside of the container has the proper permissions to access this device. Nvidia Hardware acceleration users for Nvidia will need to install the container runtime provided by Nvidia on their host, instructions can be found here: https://github.com/NVIDIA/nvidia-docker We automatically add the necessary environment variable that will utilise all the features available on a GPU on the host. Once nvidia-docker is installed on your host you will need to re/create the docker container with the nvidia container runtime --runtime=nvidia and add an environment variable -e NVIDIA_VISIBLE_DEVICES=all (can also be set to a specific gpu's UUID, this can be discovered by running nvidia-smi --query-gpu=gpu_name,gpu_uuid --format=csv ). NVIDIA automatically mounts the GPU and drivers from your host into the BOINC docker container. Usage To help you get started creating a container from this image you can either use docker-compose or the docker cli. 
docker-compose (recommended, click here for more info ) --- version: \"2.1\" services: boinc: image: lscr.io/linuxserver/boinc:latest container_name: boinc security_opt: - seccomp:unconfined #optional environment: - PUID=1000 - PGID=1000 - TZ=Europe/London - PASSWORD= #optional volumes: - /path/to/data:/config ports: - 8080:8080 devices: - /dev/dri:/dev/dri #optional restart: unless-stopped docker cli ( click here for more info ) docker run -d \\ --name=boinc \\ --security-opt seccomp=unconfined `#optional` \\ -e PUID=1000 \\ -e PGID=1000 \\ -e TZ=Europe/London \\ -e PASSWORD= `#optional` \\ -p 8080:8080 \\ -v /path/to/data:/config \\ --device /dev/dri:/dev/dri `#optional` \\ --restart unless-stopped \\ lscr.io/linuxserver/boinc:latest Parameters Docker images are configured using parameters passed at runtime (such as those above). These parameters are separated by a colon and indicate external:internal respectively. For example, -p 8080:80 would expose port 80 from inside the container to be accessible from the host's IP on port 8080 outside the container. Ports ( -p ) Parameter Function 8080 Boinc desktop gui. Environment Variables ( -e ) Env Function PUID=1000 for UserID - see below for explanation PGID=1000 for GroupID - see below for explanation TZ=Europe/London Specify a timezone to use EG Europe/London. PASSWORD= Optionally set a password for the gui. Volume Mappings ( -v ) Volume Function /config Where BOINC should store its database and config. Device Mappings ( --device ) Parameter Function /dev/dri Only needed if you want to use your Intel GPU (vaapi). Miscellaneous Options Parameter Function --security-opt seccomp=unconfined For Docker Engine only, many modern gui apps need this to function, as some syscalls are unknown to Docker. Environment variables from files (Docker secrets) You can set any environment variable from a file by using a special FILE__ prefix. 
As an example: -e FILE__PASSWORD=/run/secrets/mysecretpassword Will set the environment variable PASSWORD based on the contents of the /run/secrets/mysecretpassword file. Umask for running applications For all of our images we provide the ability to override the default umask settings for services started within the containers using the optional -e UMASK=022 setting. Keep in mind umask is not chmod it subtracts from permissions based on it's value it does not add. Please read up here before asking for support. User / Group Identifiers When using volumes ( -v flags), permissions issues can arise between the host OS and the container, we avoid this issue by allowing you to specify the user PUID and group PGID . Ensure any volume directories on the host are owned by the same user you specify and any permissions issues will vanish like magic. In this instance PUID=1000 and PGID=1000 , to find yours use id user as below: $ id username uid=1000(dockeruser) gid=1000(dockergroup) groups=1000(dockergroup) Docker Mods We publish various Docker Mods to enable additional functionality within the containers. The list of Mods available for this image (if any) as well as universal mods that can be applied to any one of our images can be accessed via the dynamic badges above. Support Info Shell access whilst the container is running: docker exec -it boinc /bin/bash To monitor the logs of the container in realtime: docker logs -f boinc Container version number docker inspect -f '{{ index .Config.Labels \"build_version\" }}' boinc Image version number docker inspect -f '{{ index .Config.Labels \"build_version\" }}' lscr.io/linuxserver/boinc:latest Versions 14.11.22: - Fix opencl driver. 18.09.22: - Rebase to jammy. 24.02.22: - Rebase to focal. 31.01.22: - Improve device permissions setting verbosity. 23.03.21: - Rebase to rdesktop-web baseimage. Deprecate GUAC_USER and GUAC_PASS env vars. Existing users can set the new var PASSWORD for the user abc . 
01.04.20: - Install boinc from ppa. 17.03.20: - Add armhf and aarch64 builds and switch to multi-arch image. 16.03.20: - Clean up old pid files. 15.03.20: - Initial release.","title":"boinc"},{"location":"images/docker-boinc/#linuxserverboinc","text":"BOINC is a platform for high-throughput computing on a large scale (thousands or millions of computers). It can be used for volunteer computing (using consumer devices) or grid computing (using organizational resources). It supports virtualized, parallel, and GPU-based applications.","title":"linuxserver/boinc"},{"location":"images/docker-boinc/#supported-architectures","text":"We utilise the docker manifest for multi-platform awareness. More information is available from docker here and our announcement here . Simply pulling lscr.io/linuxserver/boinc:latest should retrieve the correct image for your arch, but you can also pull specific arch images via tags. The architectures supported by this image are: Architecture Available Tag x86-64 \u2705 amd64-\\ arm64 \u2705 arm64v8-\\ armhf \u2705 arm32v7-\\","title":"Supported Architectures"},{"location":"images/docker-boinc/#application-setup","text":"This image sets up the BOINC client and manager and makes its interface available via Guacamole server in the browser. The interface is available at http://your-ip:8080 . By default, there is no password set for the main gui. Optional environment variable PASSWORD will allow setting a password for the user abc . You can access advanced features of the Guacamole remote desktop using ctrl + alt + shift enabling you to use remote copy/paste and different languages. It is recommended to switch to Advanced View in the top menu, because the Computing Preferences don't seem to be displayed in Simple View . Sometimes, the pop-up windows may open in a tiny box in the upper left corner of the screen. 
When that happens, you can find the corner and resize them.","title":"Application Setup"},{"location":"images/docker-boinc/#gpu-hardware-acceleration","text":"","title":"GPU Hardware Acceleration"},{"location":"images/docker-boinc/#intel","text":"Hardware acceleration users for Intel Quicksync will need to mount their /dev/dri video device inside of the container by passing the following command when running or creating the container: --device=/dev/dri:/dev/dri We will automatically ensure the abc user inside of the container has the proper permissions to access this device.","title":"Intel"},{"location":"images/docker-boinc/#nvidia","text":"Hardware acceleration users for Nvidia will need to install the container runtime provided by Nvidia on their host, instructions can be found here: https://github.com/NVIDIA/nvidia-docker We automatically add the necessary environment variable that will utilise all the features available on a GPU on the host. Once nvidia-docker is installed on your host you will need to re/create the docker container with the nvidia container runtime --runtime=nvidia and add an environment variable -e NVIDIA_VISIBLE_DEVICES=all (can also be set to a specific gpu's UUID, this can be discovered by running nvidia-smi --query-gpu=gpu_name,gpu_uuid --format=csv ). 
NVIDIA automatically mounts the GPU and drivers from your host into the BOINC docker container.","title":"Nvidia"},{"location":"images/docker-boinc/#usage","text":"To help you get started creating a container from this image you can either use docker-compose or the docker cli.","title":"Usage"},{"location":"images/docker-boinc/#docker-compose-recommended-click-here-for-more-info","text":"--- version: \"2.1\" services: boinc: image: lscr.io/linuxserver/boinc:latest container_name: boinc security_opt: - seccomp:unconfined #optional environment: - PUID=1000 - PGID=1000 - TZ=Europe/London - PASSWORD= #optional volumes: - /path/to/data:/config ports: - 8080:8080 devices: - /dev/dri:/dev/dri #optional restart: unless-stopped","title":"docker-compose (recommended, click here for more info)"},{"location":"images/docker-boinc/#docker-cli-click-here-for-more-info","text":"docker run -d \\ --name=boinc \\ --security-opt seccomp=unconfined `#optional` \\ -e PUID=1000 \\ -e PGID=1000 \\ -e TZ=Europe/London \\ -e PASSWORD= `#optional` \\ -p 8080:8080 \\ -v /path/to/data:/config \\ --device /dev/dri:/dev/dri `#optional` \\ --restart unless-stopped \\ lscr.io/linuxserver/boinc:latest","title":"docker cli (click here for more info)"},{"location":"images/docker-boinc/#parameters","text":"Docker images are configured using parameters passed at runtime (such as those above). These parameters are separated by a colon and indicate : respectively. For example, -p 8080:80 would expose port 80 from inside the container to be accessible from the host's IP on port 8080 outside the container.","title":"Parameters"},{"location":"images/docker-boinc/#ports-p","text":"Parameter Function 8080 Boinc desktop gui.","title":"Ports (-p)"},{"location":"images/docker-boinc/#environment-variables-e","text":"Env Function PUID=1000 for UserID - see below for explanation PGID=1000 for GroupID - see below for explanation TZ=Europe/London Specify a timezone to use EG Europe/London. 
PASSWORD= Optionally set a password for the gui.","title":"Environment Variables (-e)"},{"location":"images/docker-boinc/#volume-mappings-v","text":"Volume Function /config Where BOINC should store its database and config.","title":"Volume Mappings (-v)"},{"location":"images/docker-boinc/#device-mappings-device","text":"Parameter Function /dev/dri Only needed if you want to use your Intel GPU (vaapi).","title":"Device Mappings (--device)"},{"location":"images/docker-boinc/#miscellaneous-options","text":"Parameter Function --security-opt seccomp=unconfined For Docker Engine only, many modern gui apps need this to function, as some syscalls are unknown to Docker.","title":"Miscellaneous Options"},{"location":"images/docker-boinc/#environment-variables-from-files-docker-secrets","text":"You can set any environment variable from a file by using a special FILE__ prefix. As an example: -e FILE__PASSWORD=/run/secrets/mysecretpassword Will set the environment variable PASSWORD based on the contents of the /run/secrets/mysecretpassword file.","title":"Environment variables from files (Docker secrets)"},{"location":"images/docker-boinc/#umask-for-running-applications","text":"For all of our images we provide the ability to override the default umask settings for services started within the containers using the optional -e UMASK=022 setting. Keep in mind umask is not chmod; it subtracts from permissions based on its value, it does not add. Please read up here before asking for support.","title":"Umask for running applications"},{"location":"images/docker-boinc/#user-group-identifiers","text":"When using volumes ( -v flags), permissions issues can arise between the host OS and the container. We avoid this issue by allowing you to specify the user PUID and group PGID . Ensure any volume directories on the host are owned by the same user you specify and any permissions issues will vanish like magic. 
In this instance PUID=1000 and PGID=1000 , to find yours use id user as below: $ id username uid=1000(dockeruser) gid=1000(dockergroup) groups=1000(dockergroup)","title":"User / Group Identifiers"},{"location":"images/docker-boinc/#docker-mods","text":"We publish various Docker Mods to enable additional functionality within the containers. The list of Mods available for this image (if any) as well as universal mods that can be applied to any one of our images can be accessed via the dynamic badges above.","title":"Docker Mods"},{"location":"images/docker-boinc/#support-info","text":"Shell access whilst the container is running: docker exec -it boinc /bin/bash To monitor the logs of the container in realtime: docker logs -f boinc Container version number docker inspect -f '{{ index .Config.Labels \"build_version\" }}' boinc Image version number docker inspect -f '{{ index .Config.Labels \"build_version\" }}' lscr.io/linuxserver/boinc:latest","title":"Support Info"},{"location":"images/docker-boinc/#versions","text":"14.11.22: - Fix opencl driver. 18.09.22: - Rebase to jammy. 24.02.22: - Rebase to focal. 31.01.22: - Improve device permissions setting verbosity. 23.03.21: - Rebase to rdesktop-web baseimage. Deprecate GUAC_USER and GUAC_PASS env vars. Existing users can set the new var PASSWORD for the user abc . 01.04.20: - Install boinc from ppa. 17.03.20: - Add armhf and aarch64 builds and switch to multi-arch image. 16.03.20: - Clean up old pid files. 15.03.20: - Initial release.","title":"Versions"},{"location":"images/docker-booksonic-air/","text":"linuxserver/booksonic-air Booksonic-air is a platform for accessing the audiobooks you own wherever you are. At the moment the platform consists of: * Booksonic Air - A server for streaming your audiobooks, successor to the original Booksonic server and based on Airsonic. * Booksonic App - A DSub-based Android app for connecting to Booksonic-Air servers. 
Supported Architectures We utilise the docker manifest for multi-platform awareness. More information is available from docker here and our announcement here . Simply pulling lscr.io/linuxserver/booksonic-air:latest should retrieve the correct image for your arch, but you can also pull specific arch images via tags. The architectures supported by this image are: Architecture Available Tag x86-64 \u2705 amd64-\\ arm64 \u2705 arm64v8-\\ armhf \u2705 arm32v7-\\ Version Tags This image provides various versions that are available via tags. Please read the descriptions carefully and exercise caution when using unstable or development tags. Tag Available Description latest \u2705 Stable booksonic-air releases Application Setup Whilst this is a more up to date rebase of the original Booksonic server, upgrading in place is not supported and a fresh install has been recommended. Default user/pass is admin/admin Usage To help you get started creating a container from this image you can either use docker-compose or the docker cli. docker-compose (recommended, click here for more info ) --- version: \"2.1\" services: booksonic-air: image: lscr.io/linuxserver/booksonic-air:latest container_name: booksonic-air environment: - PUID=1000 - PGID=1000 - TZ=Europe/London - CONTEXT_PATH=url-base volumes: - :/config - :/audiobooks - :/podcasts - :/othermedia ports: - 4040:4040 restart: unless-stopped docker cli ( click here for more info ) docker run -d \\ --name=booksonic-air \\ -e PUID=1000 \\ -e PGID=1000 \\ -e TZ=Europe/London \\ -e CONTEXT_PATH=url-base \\ -p 4040:4040 \\ -v :/config \\ -v :/audiobooks \\ -v :/podcasts \\ -v :/othermedia \\ --restart unless-stopped \\ lscr.io/linuxserver/booksonic-air:latest Parameters Docker images are configured using parameters passed at runtime (such as those above). These parameters are separated by a colon and indicate : respectively. 
For example, -p 8080:80 would expose port 80 from inside the container to be accessible from the host's IP on port 8080 outside the container. Ports ( -p ) Parameter Function 4040 Application WebUI Environment Variables ( -e ) Env Function PUID=1000 for UserID - see below for explanation PGID=1000 for GroupID - see below for explanation TZ=Europe/London Specify a timezone to use EG Europe/London. CONTEXT_PATH=url-base Base url for use with reverse proxies etc. Volume Mappings ( -v ) Volume Function /config Configuration files. /audiobooks Audiobooks. /podcasts Podcasts. /othermedia Other media. Miscellaneous Options Parameter Function Environment variables from files (Docker secrets) You can set any environment variable from a file by using a special prepend FILE__ . As an example: -e FILE__PASSWORD=/run/secrets/mysecretpassword Will set the environment variable PASSWORD based on the contents of the /run/secrets/mysecretpassword file. Umask for running applications For all of our images we provide the ability to override the default umask settings for services started within the containers using the optional -e UMASK=022 setting. Keep in mind umask is not chmod it subtracts from permissions based on it's value it does not add. Please read up here before asking for support. User / Group Identifiers When using volumes ( -v flags), permissions issues can arise between the host OS and the container, we avoid this issue by allowing you to specify the user PUID and group PGID . Ensure any volume directories on the host are owned by the same user you specify and any permissions issues will vanish like magic. In this instance PUID=1000 and PGID=1000 , to find yours use id user as below: $ id username uid=1000(dockeruser) gid=1000(dockergroup) groups=1000(dockergroup) Docker Mods We publish various Docker Mods to enable additional functionality within the containers. 
The list of Mods available for this image (if any) as well as universal mods that can be applied to any one of our images can be accessed via the dynamic badges above.

Support Info

Shell access whilst the container is running:
docker exec -it booksonic-air /bin/bash

To monitor the logs of the container in realtime:
docker logs -f booksonic-air

Container version number:
docker inspect -f '{{ index .Config.Labels "build_version" }}' booksonic-air

Image version number:
docker inspect -f '{{ index .Config.Labels "build_version" }}' lscr.io/linuxserver/booksonic-air:latest

Versions

18.04.22: - Rebase to Alpine 3.15.
15.09.20: - Initial Release.

linuxserver/booksonic-air

Booksonic-air is a platform for accessing the audiobooks you own wherever you are. At the moment the platform consists of:

* Booksonic Air - A server for streaming your audiobooks, successor to the original Booksonic server and based on Airsonic.
* Booksonic App - A DSub-based Android app for connecting to Booksonic-Air servers.
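The `<external>:<internal>` convention used in the Parameters section above can be illustrated with a small host-side shell sketch; the mapping value here is just the doc's own -p 8080:80 example, not something the image requires:

```shell
# Split a Docker port mapping of the form <external>:<internal>
# using POSIX parameter expansion (illustrative values only).
mapping="8080:80"
external="${mapping%%:*}"   # host-side port (left of the colon)
internal="${mapping##*:}"   # container-side port (right of the colon)
echo "host port ${external} -> container port ${internal}"
```

The same left/right reading applies to -v host/path:/container/path volume mappings.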
DEPRECATION NOTICE

This image is deprecated. We will not offer support for this image and it will not be updated. Please migrate to https://github.com/linuxserver/docker-booksonic-air

linuxserver/booksonic

Booksonic is a server and an app for streaming your audiobooks to any pc or android phone.
Most of the functionality is also available on other platforms that have apps for subsonic.

Supported Architectures

Our images support multiple architectures such as x86-64, arm64 and armhf. We utilise the docker manifest for multi-platform awareness. More information is available from docker here and our announcement here.

Simply pulling lscr.io/linuxserver/booksonic should retrieve the correct image for your arch, but you can also pull specific arch images via tags.

The architectures supported by this image are:

Architecture
Tag
x86-64
amd64-latest
arm64
arm64v8-latest
armhf
arm32v7-latest

Version Tags

This image provides various versions that are available via tags. The latest tag usually provides the latest stable version. Others are considered under development and caution must be exercised when using them.

Tag
Description
latest
Stable Booksonic releases
prerelease
Booksonic Pre-releases

Application Setup

Default user/pass is admin/admin.

Usage

To help you get started creating a container from this image you can either use docker-compose or the docker cli.

docker-compose (recommended, click here for more info)

```yaml
---
version: "2.1"
services:
  booksonic:
    image: lscr.io/linuxserver/booksonic
    container_name: booksonic
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/London
      - CONTEXT_PATH=url-base
    volumes:
      - :/config
      - :/audiobooks
      - :/podcasts
      - :/othermedia
    ports:
      - 4040:4040
    restart: unless-stopped
```

docker cli (click here for more info)

```shell
docker run -d \
  --name=booksonic \
  -e PUID=1000 \
  -e PGID=1000 \
  -e TZ=Europe/London \
  -e CONTEXT_PATH=url-base \
  -p 4040:4040 \
  -v :/config \
  -v :/audiobooks \
  -v :/podcasts \
  -v :/othermedia \
  --restart unless-stopped \
  lscr.io/linuxserver/booksonic
```

Parameters

Docker images are configured using parameters passed at runtime (such as those above). These parameters are separated by a colon and indicate <external>:<internal> respectively.
For example, -p 8080:80 would expose port 80 from inside the container to be accessible from the host's IP on port 8080 outside the container.

Ports (-p)

Parameter Function
4040 Application WebUI

Environment Variables (-e)

Env Function
PUID=1000 for UserID - see below for explanation
PGID=1000 for GroupID - see below for explanation
TZ=Europe/London Specify a timezone to use, e.g. Europe/London.
CONTEXT_PATH=url-base Base url for use with reverse proxies etc.

Volume Mappings (-v)

Volume Function
/config Configuration files.
/audiobooks Audiobooks.
/podcasts Podcasts.
/othermedia Other media.

Miscellaneous Options

Parameter Function

Environment variables from files (Docker secrets)

You can set any environment variable from a file by using a special prepend FILE__.

As an example: -e FILE__PASSWORD=/run/secrets/mysecretpassword will set the environment variable PASSWORD based on the contents of the /run/secrets/mysecretpassword file.

Umask for running applications

For all of our images we provide the ability to override the default umask settings for services started within the containers using the optional -e UMASK=022 setting. Keep in mind umask is not chmod: it subtracts from permissions based on its value, it does not add. Please read up here before asking for support.

User / Group Identifiers

When using volumes (-v flags), permissions issues can arise between the host OS and the container. We avoid this issue by allowing you to specify the user PUID and group PGID. Ensure any volume directories on the host are owned by the same user you specify and any permissions issues will vanish like magic.

In this instance PUID=1000 and PGID=1000; to find yours use id user as below:

$ id username
uid=1000(dockeruser) gid=1000(dockergroup) groups=1000(dockergroup)

Docker Mods

We publish various Docker Mods to enable additional functionality within the containers.
The list of Mods available for this image (if any) as well as universal mods that can be applied to any one of our images can be accessed via the dynamic badges above.

Support Info

Shell access whilst the container is running:
docker exec -it booksonic /bin/bash

To monitor the logs of the container in realtime:
docker logs -f booksonic

Container version number:
docker inspect -f '{{ index .Config.Labels "build_version" }}' booksonic

Image version number:
docker inspect -f '{{ index .Config.Labels "build_version" }}' lscr.io/linuxserver/booksonic

Versions

06.05.21: - This image is now deprecated. Please migrate to https://github.com/linuxserver/docker-booksonic-air
11.08.20: - Changed upstream github repo location.
22.12.19: - Revert to pulling in external war, upgrade jetty.
30.04.19: - Switching to build war from source, use stable booksonic releases.
24.03.19: - Switching to new Base images, shift to arm32v7 tag.
16.01.19: - Adding pipeline logic and multi arch.
05.01.19: - Linting fixes.
27.08.18: - Rebase to ubuntu bionic.
06.12.17: - Rebase to alpine 3.7.
11.07.17: - Rebase to alpine 3.6.
07.02.17: - Rebase to alpine 3.5.
13.12.16: - Initial Release.
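The umask note above ("it subtracts from permissions, it does not add") can be seen with a host-side sketch; the temp path is incidental and nothing here touches the container. New files start from mode 666, so masking with 022 yields 644:

```shell
# Demonstrate that umask 022 subtracts the group/other write bits:
# default file mode 666 masked with 022 gives 644 (rw-r--r--).
tmpdir="$(mktemp -d)"
(
  umask 022            # set in a subshell so it does not leak
  touch "$tmpdir/example"
)
perms="$(ls -l "$tmpdir/example" | cut -c1-10)"
echo "$perms"          # prints -rw-r--r--
rm -r "$tmpdir"
```

A umask can only remove bits from that starting mode; to make files group-writable you would need UMASK=002, not a "larger" value.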
linuxserver/bookstack

Bookstack is a free and open source Wiki designed for creating beautiful documentation. Featuring a simple but powerful WYSIWYG editor, it allows teams to create detailed and useful documentation with ease. Powered by SQL and including a Markdown editor for those who prefer it, BookStack is geared towards making documentation more of a pleasure than a chore.

For more information on BookStack visit their website and check it out: https://www.bookstackapp.com

Supported Architectures

We utilise the docker manifest for multi-platform awareness. More information is available from docker here and our announcement here.

Simply pulling lscr.io/linuxserver/bookstack:latest should retrieve the correct image for your arch, but you can also pull specific arch images via tags.

The architectures supported by this image are:

Architecture
Available
Tag
x86-64
✅
amd64-\<version tag>
arm64
✅
arm64v8-\<version tag>
armhf
✅
arm32v7-\<version tag>

Application Setup

The default username is admin@admin.com with the password of password; access the container at http://dockerhost:6875.

This application is dependent on a MySQL database, be it one you already have or a new one. If you do not already have one, set up our MariaDB container here https://hub.docker.com/r/linuxserver/mariadb/.
If you intend to use this application behind a subfolder reverse proxy, such as our SWAG container or Traefik, you will need to make sure that the APP_URL environment variable is set to your external domain, or it will not work.

Documentation for BookStack can be found at https://www.bookstackapp.com/docs/

Advanced Users (full control over the .env file)

If you wish to use the extra functionality of BookStack such as email, Memcache, LDAP and so on, you will need to make your own .env file with guidance from the BookStack documentation. When you create the container, do not set any arguments for any SQL settings. The container will copy an exemplary .env file to /config/www/.env on your host system for you to edit.

PDF Rendering

wkhtmltopdf is available to use as an alternative PDF rendering generator as described at https://www.bookstackapp.com/docs/admin/pdf-rendering/. The path to wkhtmltopdf in this image to include in your .env file is /usr/bin/wkhtmltopdf.

Usage

To help you get started creating a container from this image you can either use docker-compose or the docker cli.
docker-compose (recommended, click here for more info)

```yaml
---
version: "2"
services:
  bookstack:
    image: lscr.io/linuxserver/bookstack
    container_name: bookstack
    environment:
      - PUID=1000
      - PGID=1000
      - APP_URL=
      - DB_HOST=bookstack_db
      - DB_USER=bookstack
      - DB_PASS=
      - DB_DATABASE=bookstackapp
    volumes:
      - /path/to/data:/config
    ports:
      - 6875:80
    restart: unless-stopped
    depends_on:
      - bookstack_db
  bookstack_db:
    image: lscr.io/linuxserver/mariadb
    container_name: bookstack_db
    environment:
      - PUID=1000
      - PGID=1000
      - MYSQL_ROOT_PASSWORD=
      - TZ=Europe/London
      - MYSQL_DATABASE=bookstackapp
      - MYSQL_USER=bookstack
      - MYSQL_PASSWORD=
    volumes:
      - /path/to/data:/config
    restart: unless-stopped
```

docker cli (click here for more info)

```shell
docker run -d \
  --name=bookstack \
  -e PUID=1000 \
  -e PGID=1000 \
  -e APP_URL= \
  -e DB_HOST= \
  -e DB_USER= \
  -e DB_PASS= \
  -e DB_DATABASE=bookstackapp \
  -p 6875:80 \
  -v /path/to/data:/config \
  --restart unless-stopped \
  lscr.io/linuxserver/bookstack:latest
```

Parameters

Docker images are configured using parameters passed at runtime (such as those above). These parameters are separated by a colon and indicate <external>:<internal> respectively. For example, -p 8080:80 would expose port 80 from inside the container to be accessible from the host's IP on port 8080 outside the container.

Ports (-p)

Parameter Function
80 will map the container's port 80 to port 6875 on the host

Environment Variables (-e)

Env Function
PUID=1000 for UserID - see below for explanation
PGID=1000 for GroupID - see below for explanation
APP_URL= for specifying the IP:port or URL your application will be accessed on (i.e. http://192.168.1.1:6875 or https://bookstack.mydomain.com)
DB_HOST= for specifying the database host
DB_USER= for specifying the database user
DB_PASS= for specifying the database password (non-alphanumeric passwords must be properly escaped)
DB_DATABASE=bookstackapp for specifying the database to be used
Volume Mappings (-v)

Volume Function
/config this will store any uploaded data on the docker host

Miscellaneous Options

Parameter Function

Environment variables from files (Docker secrets)

You can set any environment variable from a file by using a special prepend FILE__.

As an example: -e FILE__PASSWORD=/run/secrets/mysecretpassword will set the environment variable PASSWORD based on the contents of the /run/secrets/mysecretpassword file.

Umask for running applications

For all of our images we provide the ability to override the default umask settings for services started within the containers using the optional -e UMASK=022 setting. Keep in mind umask is not chmod: it subtracts from permissions based on its value, it does not add. Please read up here before asking for support.

User / Group Identifiers

When using volumes (-v flags), permissions issues can arise between the host OS and the container. We avoid this issue by allowing you to specify the user PUID and group PGID. Ensure any volume directories on the host are owned by the same user you specify and any permissions issues will vanish like magic.

In this instance PUID=1000 and PGID=1000; to find yours use id user as below:

$ id username
uid=1000(dockeruser) gid=1000(dockergroup) groups=1000(dockergroup)

Docker Mods

We publish various Docker Mods to enable additional functionality within the containers. The list of Mods available for this image (if any) as well as universal mods that can be applied to any one of our images can be accessed via the dynamic badges above.
Support Info

Shell access whilst the container is running:
docker exec -it bookstack /bin/bash

To monitor the logs of the container in realtime:
docker logs -f bookstack

Container version number:
docker inspect -f '{{ index .Config.Labels "build_version" }}' bookstack

Image version number:
docker inspect -f '{{ index .Config.Labels "build_version" }}' lscr.io/linuxserver/bookstack:latest

Versions

10.10.22: - Remove password escape logic which caused problems for a small subset of users.
20.08.22: - Rebasing to alpine 3.15 with php8. Restructure nginx configs (see changes announcement).
14.03.22: - Add symlinks for theme support.
11.07.21: - Rebase to Alpine 3.14.
12.01.21: - Remove unused requirement, as of release 0.31.0.
17.12.20: - Make APP_URL var required (upstream changes).
17.09.20: - Rebase to alpine 3.12. Fix APP_URL setting. Bump php post max and upload max filesizes to 100MB by default.
19.12.19: - Rebasing to alpine 3.11.
26.07.19: - Use old version of tidyhtml pending upstream fixes.
28.06.19: - Rebasing to alpine 3.10.
14.06.19: - Add wkhtmltopdf to image for PDF rendering.
20.04.19: - Rebase to Alpine 3.9, add MySQL init logic.
22.03.19: - Switching to new Base images, shift to arm32v7 tag.
20.01.19: - Added php7-curl.
04.11.18: - Added php7-ldap.
15.10.18: - Changed functionality for advanced users.
08.10.18: - Advanced mode, symlink changes, sed fixing, docs updated, added some composer files.
23.09.28: - Updates pre-release.
02.07.18: - Initial Release.
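The FILE__ prefix behaviour described under "Environment variables from files (Docker secrets)" can be emulated outside the container. This host-side sketch (the variable name and temp path are made up for illustration) shows the substitution the documentation describes: for FILE__PASSWORD pointing at a file, the container ends up with PASSWORD set to that file's contents:

```shell
# Emulate FILE__ handling: FILE__PASSWORD names a file, and the
# container exports PASSWORD with the file's contents.
secret_file="$(mktemp)"
printf '%s' 'supersecret' > "$secret_file"

FILE__PASSWORD="$secret_file"          # what you would pass with -e
PASSWORD="$(cat "$FILE__PASSWORD")"    # the resulting plain variable
export PASSWORD

echo "$PASSWORD"
rm -f "$secret_file"
```

This keeps the secret out of docker inspect output and compose files, since only the file path (not the value) is passed as an environment variable.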
Keep in mind that umask is not the same as chmod: it subtracts from permissions based on its value rather than adding. Please read up here before asking for support.

### User / Group Identifiers

When using volumes (`-v` flags), permissions issues can arise between the host OS and the container. We avoid this issue by allowing you to specify the user `PUID` and group `PGID`. Ensure any volume directories on the host are owned by the same user you specify, and any permissions issues will vanish like magic.

In this instance `PUID=1000` and `PGID=1000`; to find yours, use `id user` as below:

```shell
$ id username
uid=1000(dockeruser) gid=1000(dockergroup) groups=1000(dockergroup)
```

### Docker Mods

We publish various Docker Mods to enable additional functionality within the containers. The list of Mods available for this image (if any), as well as universal mods that can be applied to any one of our images, can be accessed via the dynamic badges above.

### Support Info

* Shell access whilst the container is running: `docker exec -it bookstack /bin/bash`
* To monitor the logs of the container in realtime: `docker logs -f bookstack`
* Container version number: `docker inspect -f '{{ index .Config.Labels "build_version" }}' bookstack`
* Image version number: `docker inspect -f '{{ index .Config.Labels "build_version" }}' lscr.io/linuxserver/bookstack:latest`

### Versions

* 10.10.22: - Remove password escape logic which caused problems for a small subset of users.
* 20.08.22: - Rebasing to alpine 3.15 with php8. Restructure nginx configs (see changes announcement).
* 14.03.22: - Add symlinks for theme support.
* 11.07.21: - Rebase to Alpine 3.14.
* 12.01.21: - Remove unused requirement, as of release 0.31.0.
* 17.12.20: - Make APP_URL var required (upstream changes).
* 17.09.20: - Rebase to alpine 3.12. Fix APP_URL setting. Bump php post max and upload max filesizes to 100MB by default.
* 19.12.19: - Rebasing to alpine 3.11.
* 26.07.19: - Use old version of tidyhtml pending upstream fixes.
* 28.06.19: - Rebasing to alpine 3.10.
* 14.06.19: - Add wkhtmltopdf to image for PDF rendering.
* 20.04.19: - Rebase to Alpine 3.9, add MySQL init logic.
* 22.03.19: - Switching to new Base images, shift to arm32v7 tag.
* 20.01.19: - Added php7-curl.
* 04.11.18: - Added php7-ldap.
* 15.10.18: - Changed functionality for advanced users.
* 08.10.18: - Advanced mode, symlink changes, sed fixing, docs updated, added some composer files.
* 23.09.18: - Updates pre-release.
* 02.07.18: - Initial Release.

## linuxserver/budge

budge is an open source 'budgeting with envelopes' personal finance app.

### Supported Architectures

We utilise the docker manifest for multi-platform awareness. More information is available from docker here and our announcement here.

Simply pulling `lscr.io/linuxserver/budge:latest` should retrieve the correct image for your arch, but you can also pull specific arch images via tags.

The architectures supported by this image are:

| Architecture | Available | Tag |
| --- | --- | --- |
| x86-64 | ✅ | amd64-\<version tag> |
| arm64 | ✅ | arm64v8-\<version tag> |
| armhf | ✅ | arm32v7-\<version tag> |

### Application Setup

Access the web gui at http://SERVERIP:PORT

### Usage

To help you get started creating a container from this image you can either use docker-compose or the docker cli.
#### docker-compose (recommended)

```yaml
---
version: "2.1"
services:
  budge:
    image: lscr.io/linuxserver/budge:latest
    container_name: budge
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/New_York
    volumes:
      - /path/to/budge/config:/config
    ports:
      - 80:80
      - 443:443
    restart: unless-stopped
```

#### docker cli

```shell
docker run -d \
  --name=budge \
  -e PUID=1000 \
  -e PGID=1000 \
  -e TZ=America/New_York \
  -p 80:80 \
  -p 443:443 \
  -v /path/to/budge/config:/config \
  --restart unless-stopped \
  lscr.io/linuxserver/budge:latest
```

### Parameters

Docker images are configured using parameters passed at runtime (such as those above). These parameters are separated by a colon and indicate `<external>:<internal>` respectively. For example, `-p 8080:80` would expose port 80 from inside the container to be accessible from the host's IP on port 8080 outside the container.

#### Ports (-p)

| Parameter | Function |
| --- | --- |
| 80 | http gui |
| 443 | https gui |

#### Environment Variables (-e)

| Env | Function |
| --- | --- |
| PUID=1000 | for UserID - see below for explanation |
| PGID=1000 | for GroupID - see below for explanation |
| TZ=America/New_York | Specify a timezone to use, e.g. America/New_York |

#### Volume Mappings (-v)

| Volume | Function |
| --- | --- |
| /config | Persistent config files |

#### Miscellaneous Options

| Parameter | Function |
| --- | --- |

### Environment variables from files (Docker secrets)

You can set any environment variable from a file by using the special prefix `FILE__`. As an example, `-e FILE__PASSWORD=/run/secrets/mysecretpassword` will set the environment variable `PASSWORD` based on the contents of the `/run/secrets/mysecretpassword` file.

### Umask for running applications

For all of our images we provide the ability to override the default umask settings for services started within the containers using the optional `-e UMASK=022` setting. Keep in mind that umask is not the same as chmod: it subtracts from permissions based on its value rather than adding. Please read up here before asking for support.
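The subtractive behaviour of umask can be verified with a quick shell experiment (a sketch only; the file and directory names below are arbitrary examples, not paths the image uses):

```shell
# With umask 022, new files start from 666 and new directories from 777,
# then the umask bits are masked off.
umask 022
touch /tmp/umask_demo_file
mkdir -p /tmp/umask_demo_dir
stat -c '%a' /tmp/umask_demo_file   # 666 masked by 022 -> 644
stat -c '%a' /tmp/umask_demo_dir    # 777 masked by 022 -> 755
```

Setting `-e UMASK=000` instead would leave files at 666 and directories at 777, which is why looser umasks are occasionally used for shared media volumes.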
### User / Group Identifiers

When using volumes (`-v` flags), permissions issues can arise between the host OS and the container. We avoid this issue by allowing you to specify the user `PUID` and group `PGID`. Ensure any volume directories on the host are owned by the same user you specify, and any permissions issues will vanish like magic.

In this instance `PUID=1000` and `PGID=1000`; to find yours, use `id user` as below:

```shell
$ id username
uid=1000(dockeruser) gid=1000(dockergroup) groups=1000(dockergroup)
```

### Docker Mods

We publish various Docker Mods to enable additional functionality within the containers. The list of Mods available for this image (if any), as well as universal mods that can be applied to any one of our images, can be accessed via the dynamic badges above.

### Support Info

* Shell access whilst the container is running: `docker exec -it budge /bin/bash`
* To monitor the logs of the container in realtime: `docker logs -f budge`
* Container version number: `docker inspect -f '{{ index .Config.Labels "build_version" }}' budge`
* Image version number: `docker inspect -f '{{ index .Config.Labels "build_version" }}' lscr.io/linuxserver/budge:latest`

### Versions

* 04.15.22: - Added NPM command to run db migrations.
* 02.05.22: - Initial Release.
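The PUID/PGID lookup described above can also be scripted rather than read off the `id` output by hand. This is a sketch; the variable names simply mirror the container's `PUID`/`PGID` environment variables:

```shell
# Query the current user's numeric uid/gid, which are the values
# PUID and PGID should be set to for that user's files.
PUID=$(id -u)
PGID=$(id -g)
echo "PUID=${PUID} PGID=${PGID}"

# These could then be passed straight through, e.g.:
#   docker run -e PUID="${PUID}" -e PGID="${PGID}" ...
```

Remember the goal is that the host directories mapped into `/config` are owned by this same uid/gid, so the container's services can read and write them.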