From e748c8f208ae4b8fbe6806d2955b7cbbf670796c Mon Sep 17 00:00:00 2001
From: <>
Date: Thu, 5 Dec 2024 21:53:44 +0000
Subject: [PATCH] Deployed dd6e0be99 with MkDocs version: 1.6.1
---
 images/docker-emby/index.html   | 6 +++---
 images/docker-msedge/index.html | 6 +++---
 search/search_index.json        | 2 +-
 3 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/images/docker-emby/index.html b/images/docker-emby/index.html
index 4983329068..fcdff47113 100644
--- a/images/docker-emby/index.html
+++ b/images/docker-emby/index.html
@@ -5,7 +5,7 @@
--device=/dev/video11:/dev/video11 --device=/dev/video12:/dev/video12
Many desktop applications need access to a GPU to function properly, and even some Desktop Environments have compositor effects that will not function without a GPU. However, this is not a hard requirement and all base images will function without a video device mounted into the container.
To leverage hardware acceleration you will need to mount the /dev/dri video device inside the container.
We will automatically ensure the abc user inside the container has the proper permissions to access this device.
Hardware acceleration users for Nvidia will need to install the container runtime provided by Nvidia on their host; instructions can be found here: https://github.com/NVIDIA/nvidia-container-toolkit
We automatically add the necessary environment variable that will utilise all the features available on a GPU on the host. Once nvidia-container-toolkit is installed on your host you will need to re/create the docker container with the nvidia container runtime --runtime=nvidia and add an environment variable -e NVIDIA_VISIBLE_DEVICES=all (this can also be set to a specific GPU's UUID, which can be discovered by running nvidia-smi --query-gpu=gpu_name,gpu_uuid --format=csv). NVIDIA automatically mounts the GPU and drivers from your host into the container.
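A minimal sketch of recreating the container with the NVIDIA runtime (the container name and detach flag are illustrative; the other options shown further down this page are omitted for brevity):
docker run -d \
  --name=emby \
  --runtime=nvidia \
  -e NVIDIA_VISIBLE_DEVICES=all \
  lscr.io/linuxserver/emby:latest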
Best effort is made to install tools to allow mounting in /dev/dri on Arm devices. In most cases, if /dev/dri exists on the host it should just work. If running a Raspberry Pi 4, be sure to enable dtoverlay=vc4-fkms-v3d in your usercfg.txt.
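For example, on Ubuntu images for the Raspberry Pi the overlay line can be added to the boot partition's usercfg.txt (the exact path is an assumption and varies by distribution):
# e.g. /boot/firmware/usercfg.txt (illustrative location)
dtoverlay=vc4-fkms-v3d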
To help you get started creating a container from this image, you can either use docker-compose or the docker cli.
Info
Unless a parameter is flagged as 'optional', it is mandatory and a value must be provided.
---
services:
  emby:
    image: lscr.io/linuxserver/emby:latest
@@ -47,7 +47,7 @@
--device /dev/video12:/dev/video12 `#optional` \
--restart unless-stopped \
lscr.io/linuxserver/emby:latest
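A fuller compose sketch assembled from the parameter tables below (host paths are illustrative, the container name is an assumption, and the optional lines only apply if you use those features):
---
services:
  emby:
    image: lscr.io/linuxserver/emby:latest
    container_name: emby
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Etc/UTC
    volumes:
      - /path/to/library:/config
      - /path/to/tvshows:/data/tvshows
      - /path/to/movies:/data/movies
    ports:
      - 8096:8096
      - 8920:8920 #optional
    devices:
      - /dev/dri:/dev/dri #optional
    restart: unless-stopped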
Containers are configured using parameters passed at runtime (such as those above). These parameters are separated by a colon and indicate <external>:<internal> respectively. For example, -p 8080:80 would expose port 80 from inside the container to be accessible from the host's IP on port 8080 outside the container.
Ports (-p)
Parameter | Function
---|---
8096:8096 | HTTP web UI.
8920:8920 | HTTPS web UI (you need to set up your own certificate).
Environment Variables (-e)
Env | Function
---|---
PUID=1000 | For UserID - see below for explanation.
PGID=1000 | For GroupID - see below for explanation.
TZ=Etc/UTC | Specify a timezone to use, see this list.
Volume Mappings (-v)
Volume | Function
---|---
/config | Emby data storage location. This can grow very large, 50GB+ is likely for a large collection.
/data/tvshows | Media goes here. Add as many as needed e.g. /data/movies, /data/tv, etc.
/data/movies | Media goes here. Add as many as needed e.g. /data/movies, /data/tv, etc.
/opt/vc/lib | Path for Raspberry Pi OpenMAX libs (optional).
Device Mappings (--device)
Parameter | Function
---|---
/dev/dri | Only needed if you want to use your Intel or AMD GPU for hardware accelerated video encoding (VAAPI).
/dev/vchiq | Only needed if you want to use your Raspberry Pi OpenMAX video encoding (Bellagio).
/dev/video10 | Only needed if you want to use your Raspberry Pi V4L2 video encoding.
/dev/video11 | Only needed if you want to use your Raspberry Pi V4L2 video encoding.
/dev/video12 | Only needed if you want to use your Raspberry Pi V4L2 video encoding.
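For instance, combining a port, an environment variable, a volume and a device from the tables above on the docker cli (host path illustrative, container name assumed):
docker run -d \
  --name=emby \
  -p 8096:8096 \
  -e TZ=Etc/UTC \
  -v /path/to/library:/config \
  --device /dev/dri:/dev/dri `#optional` \
  lscr.io/linuxserver/emby:latest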
You can set any environment variable from a file by using the special prefix FILE__. As an example, -e FILE__MYVAR=/run/secrets/mysecretvariable will set the environment variable MYVAR based on the contents of the /run/secrets/mysecretvariable file.
For all of our images we provide the ability to override the default umask settings for services started within the containers using the optional -e UMASK=022 setting. Keep in mind umask is not chmod: it subtracts from permissions based on its value, it does not add. For example, a umask of 022 turns a default 666 file mode into 644 and 777 into 755. Please read up here before asking for support.
When using volumes (-v flags), permissions issues can arise between the host OS and the container; we avoid this issue by allowing you to specify the user PUID and group PGID.
Ensure any volume directories on the host are owned by the same user you specify and any permissions issues will vanish like magic.
In this instance PUID=1000 and PGID=1000. To find yours, use id your_user as below:
Example output:
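uid=1000(your_user) gid=1000(your_user) groups=1000(your_user)
(illustrative output; your values may differ)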
We publish various Docker Mods to enable additional functionality within the containers. The list of Mods available for this image (if any) as well as universal mods that can be applied to any one of our images can be accessed via the dynamic badges above.
Shell access whilst the container is running:
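docker exec -it emby /bin/bash
(assumes the container is named emby, matching the examples above)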
The ARM variants can be built on x86_64 hardware and vice versa using lscr.io/linuxserver/qemu-static
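A typical registration step might look like the following; the exact invocation is an assumption, so check the qemu-static image's documentation:
docker run --rm --privileged lscr.io/linuxserver/qemu-static --reset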
Once registered, you can define the Dockerfile to use with -f Dockerfile.aarch64.
Deprecate UMASK_SET in favor of UMASK in baseimage, see above for more information. Remove no longer used mapping for /transcode.