Bot Updating Documentation

LinuxServer-CI 2025-08-23 12:54:27 +00:00
parent e4ba079f65
commit b8c1971cbf
No known key found for this signature in database
1 changed file with 6 additions and 0 deletions


@@ -44,6 +44,7 @@ This image provides various versions that are available via tags. Please read th
 | :----: | :----: |--- |
 | latest | ✅ | Stable releases |
 | gpu | ✅ | Releases with Nvidia GPU support (amd64 only) |
+| gpu-legacy | ✅ | Legacy releases with Nvidia GPU support for pre-Turing cards (amd64 only) |
 ## Application Setup
@@ -78,6 +79,7 @@ services:
       - PGID=1000
       - TZ=Etc/UTC
       - WHISPER_MODEL=tiny-int8
+      - LOCAL_ONLY= #optional
       - WHISPER_BEAM=1 #optional
       - WHISPER_LANG=en #optional
     volumes:
@@ -96,6 +98,7 @@ docker run -d \
   -e PGID=1000 \
   -e TZ=Etc/UTC \
   -e WHISPER_MODEL=tiny-int8 \
+  -e LOCAL_ONLY= `#optional` \
   -e WHISPER_BEAM=1 `#optional` \
   -e WHISPER_LANG=en `#optional` \
   -p 10300:10300 \
@@ -122,6 +125,7 @@ Containers are configured using parameters passed at runtime (such as those abov
 | `PGID=1000` | for GroupID - see below for explanation |
 | `TZ=Etc/UTC` | specify a timezone to use, see this [list](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List). |
 | `WHISPER_MODEL=tiny-int8` | Whisper model that will be used for transcription. From [here](https://github.com/SYSTRAN/faster-whisper/blob/master/faster_whisper/utils.py#L12-L31), all with `-int8` compressed variants |
+| `LOCAL_ONLY=` | If set to `true`, or any other value, the container will not attempt to download models from HuggingFace and will only use locally-provided models. |
 | `WHISPER_BEAM=1` | Number of candidates to consider simultaneously during transcription. |
 | `WHISPER_LANG=en` | Language that you will speak to the add-on. |
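The new `LOCAL_ONLY` option slots into the run command shown earlier; a minimal sketch of using it with a locally-provided model (the host path `/path/to/faster-whisper/config` and the choice of `true` as the value are assumptions for illustration — any non-empty value enables the mode per the table above):

```shell
# Sketch: run with model downloads disabled, so only the model
# already present in the mounted /config volume is used.
docker run -d \
  --name=faster-whisper \
  -e PUID=1000 \
  -e PGID=1000 \
  -e TZ=Etc/UTC \
  -e WHISPER_MODEL=tiny-int8 \
  -e LOCAL_ONLY=true \
  -p 10300:10300 \
  -v /path/to/faster-whisper/config:/config \
  lscr.io/linuxserver/faster-whisper:latest
```

If the named model is not present locally when `LOCAL_ONLY` is set, transcription cannot fall back to a HuggingFace download, so place the model files in the config volume before starting the container.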
@@ -346,6 +350,8 @@ To help with development, we generate this dependency graph.
 ## Versions
+* **20.08.25:** - Add gpu-legacy branch for pre-Turing cards.
+* **10.08.25:** - Add support for local-only mode.
 * **30.12.24:** - Add arm64 support for non-GPU builds.
 * **05.12.24:** - Build from Github releases rather than Pypi.
 * **18.07.24:** - Rebase to Ubuntu Noble.