From 23de326b61c61c6d5e0b77a2429806b8485cda3f Mon Sep 17 00:00:00 2001
From: LinuxServer-CI
Date: Sun, 18 Apr 2021 10:00:34 +0000
Subject: [PATCH] Bot Updating Documentation

---
 images/docker-emby.md | 89 +++++++++++++++++++++----------------------
 1 file changed, 44 insertions(+), 45 deletions(-)

diff --git a/images/docker-emby.md b/images/docker-emby.md
index 85401afa0..e095fb67b 100644
--- a/images/docker-emby.md
+++ b/images/docker-emby.md
@@ -1,6 +1,9 @@
 ---
 title: emby
 ---
+
+
+
 # [linuxserver/emby](https://github.com/linuxserver/docker-emby)

 [![GitHub Stars](https://img.shields.io/github/stars/linuxserver/docker-emby.svg?color=94398d&labelColor=555555&logoColor=ffffff&style=for-the-badge&logo=github)](https://github.com/linuxserver/docker-emby)

@@ -38,6 +41,9 @@ This image provides various versions that are available via tags. `latest` tag u
 | latest | Stable emby releases |
 | beta | Beta emby releases |

+## Application Setup
+
+Webui can be found at `http://:8096`
+
+Emby has complete and verbose documentation, located [here](https://github.com/MediaBrowser/Wiki/wiki).
+
+Hardware acceleration users for Intel Quicksync and AMD VAAPI will need to mount their /dev/dri video device inside of the container by passing the following option when running or creating the container:
+
+```
+--device=/dev/dri:/dev/dri
+```
+
+We will automatically ensure the abc user inside of the container has the proper permissions to access this device.
+
+Hardware acceleration users for Nvidia will need to install the container runtime provided by Nvidia on their host; instructions can be found here:
+
+https://github.com/NVIDIA/nvidia-docker
+
+We automatically add the necessary environment variable that will utilise all the features available on a GPU on the host.
Once nvidia-docker is installed on your host you will need to recreate the docker container with the Nvidia container runtime `--runtime=nvidia` and add an environment variable `-e NVIDIA_VISIBLE_DEVICES=all` (this can also be set to a specific GPU's UUID, which can be discovered by running `nvidia-smi --query-gpu=gpu_name,gpu_uuid --format=csv`). NVIDIA automatically mounts the GPU and drivers from your host into the emby container.
+
+### OpenMAX (Raspberry Pi)
+
+Hardware acceleration users for Raspberry Pi OpenMAX will need to mount their /dev/vchiq video device inside of the container, along with their system OpenMAX libs, by passing the following options when running or creating the container:
+
+```
+--device=/dev/vchiq:/dev/vchiq
+-v /opt/vc/lib:/opt/vc/lib
+```
+
+### V4L2 (Raspberry Pi)
+
+Hardware acceleration users for Raspberry Pi V4L2 will need to mount their /dev/video1X devices inside of the container by passing the following options when running or creating the container:
+
+```
+--device=/dev/video10:/dev/video10
+--device=/dev/video11:/dev/video11
+--device=/dev/video12:/dev/video12
+```
+
 ## Usage

Here are some example snippets to help you get started creating a container from this image.

@@ -76,7 +114,7 @@ services:

### docker cli

-```
+```bash
docker run -d \
  --name=emby \
  -e PUID=1000 \

@@ -97,7 +135,6 @@ docker run -d \
 ghcr.io/linuxserver/emby
 ```
-
## Parameters

Docker images are configured using parameters passed at runtime (such as those above). These parameters are separated by a colon and indicate `<external>:<internal>` respectively. For example, `-p 8080:80` would expose port `80` from inside the container to be accessible from the host's IP on port `8080` outside the container.

@@ -109,7 +146,6 @@ Docker images are configured using parameters passed at runtime (such as those a

| `8096` | Http webUI. |
| `8920` | Https webUI (you need to set up your own certificate). 
|
-
### Environment Variables (`-e`)

| Env | Function |
@@ -127,7 +163,8 @@ Docker images are configured using parameters passed at runtime (such as those a
 | `/data/movies` | Media goes here. Add as many as needed e.g. `/data/movies`, `/data/tv`, etc. |
 | `/opt/vc/lib` | Path for Raspberry Pi OpenMAX libs *optional*. |

-#### Device Mappings (`--device`)
+### Device Mappings (`--device`)
+
 | Parameter | Function |
 | :-----: | --- |
 | `/dev/dri` | Only needed if you want to use your Intel or AMD GPU for hardware accelerated video encoding (vaapi). |
@@ -136,14 +173,13 @@
 | `/dev/video11` | Only needed if you want to use your Raspberry Pi V4L2 video encoding. |
 | `/dev/video12` | Only needed if you want to use your Raspberry Pi V4L2 video encoding. |
-
## Environment variables from files (Docker secrets)

You can set any environment variable from a file by using a special prepend `FILE__`.

As an example:

-```
+```bash
-e FILE__PASSWORD=/run/secrets/mysecretpassword
```

@@ -154,7 +190,6 @@ Will set the environment variable `PASSWORD` based on the contents of the `/run/

## Umask for running applications

For all of our images we provide the ability to override the default umask settings for services started within the containers using the optional `-e UMASK=022` setting. Keep in mind that umask is not chmod: it subtracts from permissions based on its value, it does not add. Please read up [here](https://en.wikipedia.org/wiki/Umask) before asking for support.

-
## User / Group Identifiers

When using volumes (`-v` flags), permissions issues can arise between the host OS and the container. We avoid this issue by allowing you to specify the user `PUID` and group `PGID`.
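As a minimal sketch of the paragraph above (assuming a POSIX host where the standard `id` utility is available), you can print the exact flags that match the current host user:

```shell
#!/bin/sh
# Look up the current user's numeric uid/gid; passing these values as
# PUID and PGID keeps files created by the container owned by this
# host user.
PUID="$(id -u)"
PGID="$(id -g)"
echo "-e PUID=${PUID} -e PGID=${PGID}"
```

For example, a user with uid 1000 and gid 1000 would see `-e PUID=1000 -e PGID=1000`, matching the values used in the snippets above.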
@@ -163,53 +198,17 @@ Ensure any volume directories on the host are owned by the same user you specify

In this instance `PUID=1000` and `PGID=1000`; to find yours, use `id username` as below:

-```
+```bash
$ id username
uid=1000(dockeruser) gid=1000(dockergroup) groups=1000(dockergroup)
```

-## Application Setup
-
-Webui can be found at `http://:8096`
-
-Emby has very complete and verbose documentation located [here](https://github.com/MediaBrowser/Wiki/wiki) .
-
-Hardware acceleration users for Intel Quicksync and AMD VAAPI will need to mount their /dev/dri video device inside of the container by passing the following command when running or creating the container:
-
-```--device=/dev/dri:/dev/dri```
-
-We will automatically ensure the abc user inside of the container has the proper permissions to access this device.
-
-Hardware acceleration users for Nvidia will need to install the container runtime provided by Nvidia on their host, instructions can be found here:
-
-https://github.com/NVIDIA/nvidia-docker
-
-We automatically add the necessary environment variable that will utilise all the features available on a GPU on the host. Once nvidia-docker is installed on your host you will need to re/create the docker container with the nvidia container runtime `--runtime=nvidia` and add an environment variable `-e NVIDIA_VISIBLE_DEVICES=all` (can also be set to a specific gpu's UUID, this can be discovered by running `nvidia-smi --query-gpu=gpu_name,gpu_uuid --format=csv` ). NVIDIA automatically mounts the GPU and drivers from your host into the emby docker.
- -### OpenMAX (Raspberry Pi) - -Hardware acceleration users for Raspberry Pi OpenMAX will need to mount their /dev/vchiq video device inside of the container and their system OpenMax libs by passing the following options when running or creating the container: -``` ---device=/dev/vchiq:/dev/vchiq --v /opt/vc/lib:/opt/vc/lib -``` - -### V4L2 (Raspberry Pi) - -Hardware acceleration users for Raspberry Pi V4L2 will need to mount their /dev/video1X devices inside of the container by passing the following options when running or creating the container: -``` ---device=/dev/video10:/dev/video10 ---device=/dev/video11:/dev/video11 ---device=/dev/video12:/dev/video12 -``` - - ## Docker Mods + [![Docker Mods](https://img.shields.io/badge/dynamic/yaml?color=94398d&labelColor=555555&logoColor=ffffff&style=for-the-badge&label=emby&query=%24.mods%5B%27emby%27%5D.mod_count&url=https%3A%2F%2Fraw.githubusercontent.com%2Flinuxserver%2Fdocker-mods%2Fmaster%2Fmod-list.yml)](https://mods.linuxserver.io/?mod=emby "view available mods for this container.") [![Docker Universal Mods](https://img.shields.io/badge/dynamic/yaml?color=94398d&labelColor=555555&logoColor=ffffff&style=for-the-badge&label=universal&query=%24.mods%5B%27universal%27%5D.mod_count&url=https%3A%2F%2Fraw.githubusercontent.com%2Flinuxserver%2Fdocker-mods%2Fmaster%2Fmod-list.yml)](https://mods.linuxserver.io/?mod=universal "view available universal mods.") We publish various [Docker Mods](https://github.com/linuxserver/docker-mods) to enable additional functionality within the containers. The list of Mods available for this image (if any) as well as universal mods that can be applied to any one of our images can be accessed via the dynamic badges above. - ## Support Info * Shell access whilst the container is running:
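The shell-access bullet above is typically completed with a command along these lines; a sketch assuming the container was started with `--name=emby` as in the docker cli example earlier (this requires a running Docker daemon and an existing `emby` container, so adjust the name if yours differs):

```shell
# Attach an interactive shell to the running container (the name
# "emby" is assumed, matching the --name flag used earlier).
docker exec -it emby /bin/bash
```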