From 889a22aba4113eed9d0d482a1a32fb0c672a85ee Mon Sep 17 00:00:00 2001
From: <>
Date: Thu, 22 Feb 2024 21:25:22 +0000
Subject: [PATCH] Deployed 0dbf5a42 with MkDocs version: 1.5.3

---
 images/docker-emby/index.html | 6 +++---
 search/search_index.json     | 2 +-
 sitemap.xml.gz               | Bin 1895 -> 1895 bytes
 3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/images/docker-emby/index.html b/images/docker-emby/index.html
index 43c07831df..bf6c58ff3d
--- a/images/docker-emby/index.html
+++ b/images/docker-emby/index.html
@@ -4,8 +4,8 @@
Hardware acceleration users for Raspberry Pi V4L2 will need to mount their /dev/video1X devices inside the container by passing the following options when running or creating the container:
--device=/dev/video10:/dev/video10
--device=/dev/video11:/dev/video11
--device=/dev/video12:/dev/video12
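As a rough sketch only, the device flags above could be combined into a full run command on a Raspberry Pi like the one below; everything other than the --device flags (container name, volume path, port) is an illustrative assumption, not taken from this page:
# Sketch: only the --device mounts come from the documentation above;
# the name, volume path, and port are placeholder assumptions.
docker run -d \
  --name=emby \
  --device=/dev/video10:/dev/video10 \
  --device=/dev/video11:/dev/video11 \
  --device=/dev/video12:/dev/video12 \
  -v /path/to/library:/config \
  -p 8096:8096 \
  lscr.io/linuxserver/emby:latest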
Many desktop applications need access to a GPU to function properly, and even some Desktop Environments have compositor effects that will not function without a GPU. However, this is not a hard requirement, and all base images will function without a video device mounted into the container.
To leverage hardware acceleration you will need to mount the /dev/dri video device inside the container.
We will automatically ensure the abc user inside the container has the proper permissions to access this device.
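As a minimal sketch, passing the render device on the docker cli could look like the following; the container name is a placeholder, and the --device mount is the only part that matters here:
# Sketch: mount /dev/dri for VAAPI/DRI hardware acceleration;
# all other flags are placeholders.
docker run -d \
  --name=emby \
  --device=/dev/dri:/dev/dri \
  lscr.io/linuxserver/emby:latest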
Hardware acceleration users for Nvidia will need to install the container runtime provided by Nvidia on their host; instructions can be found here: https://github.com/NVIDIA/nvidia-container-toolkit
We automatically add the necessary environment variable that will utilise all the features available on a GPU on the host. Once nvidia-container-toolkit is installed on your host, you will need to create or re-create the docker container with the Nvidia container runtime --runtime=nvidia and add an environment variable -e NVIDIA_VISIBLE_DEVICES=all (this can also be set to a specific GPU's UUID, which can be discovered by running nvidia-smi --query-gpu=gpu_name,gpu_uuid --format=csv). NVIDIA automatically mounts the GPU and drivers from your host into the container.
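Putting those two pieces together, a minimal sketch of the run command could look like this; only --runtime=nvidia and NVIDIA_VISIBLE_DEVICES come from the text above, the rest is a placeholder assumption:
# Sketch: assumes nvidia-container-toolkit is already installed on the host.
docker run -d \
  --name=emby \
  --runtime=nvidia \
  -e NVIDIA_VISIBLE_DEVICES=all \
  lscr.io/linuxserver/emby:latest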
Best effort is made to install tools that allow mounting /dev/dri on Arm devices. In most cases, if /dev/dri exists on the host it should just work. If running a Raspberry Pi 4, be sure to enable dtoverlay=vc4-fkms-v3d in your usercfg.txt.
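If you prefer to add that overlay from the shell, a sketch might be the command below; the /boot/firmware/usercfg.txt path is an assumption and varies by distro and install, so confirm the location on your system first:
# Sketch: append the overlay, then reboot for it to take effect.
echo 'dtoverlay=vc4-fkms-v3d' | sudo tee -a /boot/firmware/usercfg.txt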
To help you get started creating a container from this image, you can either use docker-compose or the docker cli.
---
services:
  emby:
    image: lscr.io/linuxserver/emby:latest
@@ -70,4 +70,4 @@
--pull \
-t lscr.io/linuxserver/emby:latest .
The ARM variants can be built on x86_64 hardware using multiarch/qemu-user-static
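The registration step referenced in the next sentence is the standard multiarch/qemu-user-static invocation; as a sketch:
# Sketch: registers the qemu binfmt handlers so ARM images can be built on x86_64.
docker run --rm --privileged multiarch/qemu-user-static:register --reset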
Once registered, you can define the dockerfile to use with -f Dockerfile.aarch64.
UMASK_SET in favor of UMASK in baseimage, see above for more information. Remove no longer used mapping for /transcode.