From 530796fe28291b8f4b768adbc526f38e2496a0e7 Mon Sep 17 00:00:00 2001
From: <>
Date: Wed, 5 Jul 2023 19:54:34 +0000
Subject: [PATCH] Deployed a52aa61f with MkDocs version: 1.4.3

---
 images/docker-jellyfin/index.html | 2 +-
 sitemap.xml.gz                    | Bin 1649 -> 1649 bytes
 2 files changed, 1 insertion(+), 1 deletion(-)

diff --git a/images/docker-jellyfin/index.html b/images/docker-jellyfin/index.html
index b0a63bda60..d54864996d 100644
--- a/images/docker-jellyfin/index.html
+++ b/images/docker-jellyfin/index.html
@@ -1,4 +1,4 @@
- jellyfin - LinuxServer.io
+ jellyfin - LinuxServer.io

linuxserver/jellyfin

[Badges: Scarf.io pulls | GitHub Stars | GitHub Release | GitHub Package Repository | GitLab Container Registry | Quay.io | Docker Pulls | Docker Stars | Jenkins Build | LSIO CI]

Jellyfin is a Free Software Media System that puts you in control of managing and streaming your media. It is an alternative to the proprietary Emby and Plex, to provide media from a dedicated server to end-user devices via multiple apps. Jellyfin is descended from Emby's 3.5.2 release and ported to the .NET Core framework to enable full cross-platform support. There are no strings attached, no premium licenses or features, and no hidden agendas: just a team who want to build something better and work together to achieve it.

Supported Architectures

We utilise the docker manifest for multi-platform awareness. More information is available from docker here and our announcement here.

Simply pulling lscr.io/linuxserver/jellyfin:latest should retrieve the correct image for your arch, but you can also pull specific arch images via tags.
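For example, pulling the multi-arch image versus explicitly pulling an arch-specific image might look like the following sketch (the arm64v8-latest tag shown is an assumption based on the tag format in the table below):

# Multi-arch manifest: resolves to the correct image for your platform
docker pull lscr.io/linuxserver/jellyfin:latest

# Explicitly pull an arch-specific image (tag format assumed: arm64v8-<version tag>)
docker pull lscr.io/linuxserver/jellyfin:arm64v8-latest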

The architectures supported by this image are:

Architecture    Available    Tag
x86-64          ✅           amd64-<version tag>
arm64           ✅           arm64v8-<version tag>
armhf           ❌

Version Tags

This image provides various versions that are available via tags. Please read the descriptions carefully and exercise caution when using unstable or development tags.

Tag        Available    Description
latest     ✅           Stable Jellyfin releases
nightly    ✅           Nightly Jellyfin releases
Application Setup

The web UI can be found at http://<your-ip>:8096
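For orientation, a minimal run command that exposes the web UI might look like the following sketch (the PUID/PGID/TZ values and host paths are placeholders, and the media mount point is an assumption; see the full documentation for the authoritative parameter list):

# Minimal sketch of a basic deployment
docker run -d \
  --name=jellyfin \
  -e PUID=1000 \
  -e PGID=1000 \
  -e TZ=Etc/UTC \
  -p 8096:8096 \
  -v /path/to/jellyfin/config:/config \
  -v /path/to/media:/data/media \
  --restart unless-stopped \
  lscr.io/linuxserver/jellyfin:latest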

More information can be found in the official documentation here.

Hardware Acceleration

Intel

Hardware acceleration users for Intel Quicksync will need to mount their /dev/dri video device inside of the container by passing the following option when running or creating the container:

--device=/dev/dri:/dev/dri

We will automatically ensure the abc user inside of the container has the proper permissions to access this device.
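For instance, a run command with the Intel device passthrough might look like this sketch (host paths are placeholders; other parameters as in a normal deployment):

# Pass the Intel GPU render/display nodes through to the container
docker run -d \
  --name=jellyfin \
  --device=/dev/dri:/dev/dri \
  -p 8096:8096 \
  -v /path/to/jellyfin/config:/config \
  lscr.io/linuxserver/jellyfin:latest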

To enable OpenCL-based DV, HDR10 and HLG tone mapping, please refer to the OpenCL-Intel mod here:

https://mods.linuxserver.io/?mod=jellyfin
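On LinuxServer.io images, mods are generally enabled via the DOCKER_MODS environment variable; the exact mod tag below is an assumption, so confirm it on the mods page linked above:

# Enable the OpenCL-Intel mod (mod tag assumed; verify on mods.linuxserver.io)
-e DOCKER_MODS=linuxserver/mods:jellyfin-opencl-intel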

Nvidia

Hardware acceleration users for Nvidia will need to install the container runtime provided by Nvidia on their host; instructions can be found here:

https://github.com/NVIDIA/nvidia-docker

We automatically add the necessary environment variable that will utilise all the features available on a GPU on the host. Once nvidia-docker is installed on your host, you will need to re-create the docker container with the Nvidia container runtime (--runtime=nvidia) and add the environment variable -e NVIDIA_VISIBLE_DEVICES=all. This can also be set to a specific GPU's UUID, which can be discovered by running nvidia-smi --query-gpu=gpu_name,gpu_uuid --format=csv. NVIDIA automatically mounts the GPU and drivers from your host into the Jellyfin docker container.
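Putting that together, a re-created container might look roughly like the sketch below (assumes the Nvidia container runtime is already installed; host paths are placeholders):

# Optional: list GPU names and UUIDs to target a specific card
nvidia-smi --query-gpu=gpu_name,gpu_uuid --format=csv

# Re-create the container with the Nvidia runtime and GPU visibility
docker run -d \
  --name=jellyfin \
  --runtime=nvidia \
  -e NVIDIA_VISIBLE_DEVICES=all \
  -p 8096:8096 \
  -v /path/to/jellyfin/config:/config \
  lscr.io/linuxserver/jellyfin:latest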

OpenMAX (Raspberry Pi)

Hardware acceleration users for Raspberry Pi MMAL/OpenMAX will need to mount their /dev/vcsm and /dev/vchiq video devices inside of the container, along with their system OpenMAX libs, by passing the following options when running or creating the container:

--device=/dev/vcsm:/dev/vcsm
--device=/dev/vchiq:/dev/vchiq
-v /opt/vc/lib:/opt/vc/lib
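Combined into a run command, this might look like the following sketch (host paths are placeholders; other parameters omitted for brevity):

docker run -d \
  --name=jellyfin \
  --device=/dev/vcsm:/dev/vcsm \
  --device=/dev/vchiq:/dev/vchiq \
  -v /opt/vc/lib:/opt/vc/lib \
  -p 8096:8096 \
  -v /path/to/jellyfin/config:/config \
  lscr.io/linuxserver/jellyfin:latest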

V4L2 (Raspberry Pi)

Hardware acceleration users for Raspberry Pi V4L2 will need to mount their /dev/video1X devices inside of the container by passing the following options when running or creating the container:

--device=/dev/video10:/dev/video10
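As a sketch, passing the V4L2 decoder devices through might look like this (on a typical Raspberry Pi OS install the codec nodes are usually /dev/video10, /dev/video11 and /dev/video12, but check which /dev/video1X devices actually exist on your host; paths are placeholders):

docker run -d \
  --name=jellyfin \
  --device=/dev/video10:/dev/video10 \
  --device=/dev/video11:/dev/video11 \
  --device=/dev/video12:/dev/video12 \
  -p 8096:8096 \
  -v /path/to/jellyfin/config:/config \
  lscr.io/linuxserver/jellyfin:latest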
diff --git a/sitemap.xml.gz b/sitemap.xml.gz
index 0780e94839e0624dffe0a63b97a63d663032055b..041f0f2ed1852f050e2b01b40ceac5aac1021c40 100644
GIT binary patch
delta 15
Wcmey!^O1*5zMF%i=+s8GY&HNa#swb$

delta 15
Wcmey!^O1*5zMF$%&54a{*=ztV7zI}V