diff --git a/docs/images/docker-faster-whisper.md b/docs/images/docker-faster-whisper.md
index 28d255ef21..6cdf207375 100755
--- a/docs/images/docker-faster-whisper.md
+++ b/docs/images/docker-faster-whisper.md
@@ -31,7 +31,7 @@ The architectures supported by this image are:
 
 | Architecture | Available | Tag |
 | :----: | :----: | ---- |
 | x86-64 | ✅ | amd64-\ |
-| arm64 | ❌ | |
+| arm64 | ✅ | arm64v8-\ |
 | armhf | ❌ | |
 
@@ -41,7 +41,7 @@ This image provides various versions that are available via tags. Please read th
 
 | Tag | Available | Description |
 | :----: | :----: |--- |
 | latest | ✅ | Stable releases |
-| gpu | ✅ | Releases with Nvidia GPU support |
+| gpu | ✅ | Releases with Nvidia GPU support (amd64 only) |
 
 ## Application Setup
@@ -119,7 +119,7 @@ Containers are configured using parameters passed at runtime (such as those abov
 | `PUID=1000` | for UserID - see below for explanation |
 | `PGID=1000` | for GroupID - see below for explanation |
 | `TZ=Etc/UTC` | specify a timezone to use, see this [list](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List). |
-| `WHISPER_MODEL=tiny-int8` | Whisper model that will be used for transcription. From `tiny`, `base`, `small` and `medium`, all with `-int8` compressed variants |
+| `WHISPER_MODEL=tiny-int8` | Whisper model that will be used for transcription. From [here](https://github.com/SYSTRAN/faster-whisper/blob/master/faster_whisper/utils.py#L12-L31), all with `-int8` compressed variants |
 | `WHISPER_BEAM=1` | Number of candidates to consider simultaneously during transcription. |
 | `WHISPER_LANG=en` | Language that you will speak to the add-on. |
 
@@ -342,6 +342,7 @@ To help with development, we generate this dependency graph.
 
 ## Versions
 
+* **30.12.24:** - Add arm64 support for non-GPU builds.
 * **05.12.24:** - Build from Github releases rather than Pypi.
 * **18.07.24:** - Rebase to Ubuntu Noble.
 * **19.05.24:** - Bump CUDA to 12 on GPU branch.
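
The runtime parameters touched by this diff could be exercised with a `docker run` invocation along these lines. This is a minimal sketch, not confirmed by the diff itself: the image name follows the usual lscr.io/linuxserver naming, and the port mapping and volume path are assumptions based on typical linuxserver.io container conventions.

```shell
# Hedged sketch: image name, port, and volume path are assumptions,
# not stated in this diff.
docker run -d \
  --name=faster-whisper \
  -e PUID=1000 \
  -e PGID=1000 \
  -e TZ=Etc/UTC \
  -e WHISPER_MODEL=tiny-int8 \
  -e WHISPER_BEAM=1 \
  -e WHISPER_LANG=en \
  -p 10300:10300 \
  -v /path/to/config:/config \
  --restart unless-stopped \
  lscr.io/linuxserver/faster-whisper:latest
```

On arm64 hosts, per this change, only the non-GPU tags apply; the `gpu` tag remains amd64-only.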