GitBook: [master] 9 pages modified

pull/1/head
Josh Stark 2019-01-27 11:24:19 +00:00 committed by gitbook-bot
parent 6f11801ec1
commit 57e08cbe44
No known key found for this signature in database
GPG Key ID: 07D2180C7B12D0FF
9 changed files with 28 additions and 14 deletions

View file

@@ -1,3 +1,4 @@
# Introduction
This is the home of all LinuxServer image documentation, including guides and knowledge-sharing articles.

View file

@@ -1,5 +1,6 @@
-# Summary
+# Table of contents
* [Introduction](README.md)
* [General](general/README.md)
* [Container Execution](general/container-execution.md)
* [Docker Containers: 101](general/containers-101.md)
@@ -8,3 +9,4 @@
* [Understanding PUID and PGID](general/understanding-puid-and-pgid.md)
* [Updating our containers](general/updating-our-containers.md)
* [Volumes](general/volumes.md)

View file

@@ -44,3 +44,4 @@ Or the container:
```bash
docker inspect -f '{{ index .Config.Labels "build_version" }}' linuxserver/<image_name>
```

View file

@@ -10,7 +10,7 @@ To get started, not much. You will need to know about some of the terminology or
docker run hello-world
```
-That's it, your first docker container. It presupposes you have [docker installed](https://github.com/IronicBadger/til/blob/master/docker/yum-apt-repos-docker.md), but that's all it takes to run a container. You didn't need to know anything about installing what that app needed to run - this is the key benefit. `hello-world` is a simple example, but imagine you have a complex application with a large number of dependencies, tied to a specific version of Python or Java. Then imagine you have a second app, again tied to a specific, but different, version of Java or Python. Now you have to try and ensure these two (often conflicting) versions sit on the same host and play nice. In the world of containers these two versions can operate in complete isolation from one another. Bliss.
+That's it, your first docker container. It presupposes you have [docker installed](https://github.com/IronicBadger/til/blob/master/docker/yum-apt-repos-docker.md), but that's all it takes to run a container. You didn't need to know anything about installing what that app needed to run - this is the key benefit. `hello-world` is a simple example, but imagine you have a complex application with a large number of dependencies, tied to a specific version of Python or Java. Then imagine you have a second app, again tied to a specific, but different, version of Java or Python. Now you have to try and ensure these two \(often conflicting\) versions sit on the same host and play nice. In the world of containers these two versions can operate in complete isolation from one another. Bliss.
## Key Terminology
@@ -18,13 +18,13 @@ There are a few terms you might find useful to understand when working with cont
* **docker** - the first, and most popular, container runtime - it sits as an abstraction layer between the kernel's features, such as cgroups or namespaces, and running applications
* **container** - a sandboxed process isolated in memory; a running instance of an image
-* **image** - a pre-built filesystem in a format ready to be understood by a container runtime (usually docker)
+* **image** - a pre-built filesystem in a format ready to be understood by a container runtime \(usually docker\)
* **volume** - use volumes to persist data outside of the container's sandboxed filesystem
* **environment** - a way of configuring the sandboxed environment your container runs in
## Key Concepts
-Containers are environments completely sandboxed by the Linux kernel. It may help you to think of them *somewhat* like a small VM; however, in practice this is largely false. The Linux kernel controls access to various system resources utilising control groups (cgroups). We rely on docker to translate these complex concepts into simple ones that users can understand and consume.
+Containers are environments completely sandboxed by the Linux kernel. It may help you to think of them _somewhat_ like a small VM; however, in practice this is largely false. The Linux kernel controls access to various system resources utilising control groups \(cgroups\). We rely on docker to translate these complex concepts into simple ones that users can understand and consume.
By default, a running container has absolutely no context of the world around it. Out of the box, you cannot connect from the outside world to the webservers running on ports 80 and 443 below. To allow entry to the sandbox from the outside world, we must explicitly open it up using the `-p` flag.
@@ -32,6 +32,7 @@ By default a running container has absolutely no context of the world around it.
docker run -d --name=letsencrypt -p 80:80 -p 443:443 linuxserver/letsencrypt
```
-Take this concept and multiply it across all aspects of a running application: ports, volumes (i.e. the files you want to be available inside the container from outside the container), environment variables, and so on. For us as developers, this takes your host system out of the troubleshooting equation, as the environment the application runs in (the container) is identical from one host to the next.
+Take this concept and multiply it across all aspects of a running application: ports, volumes \(i.e. the files you want to be available inside the container from outside the container\), environment variables, and so on. For us as developers, this takes your host system out of the troubleshooting equation, as the environment the application runs in \(the container\) is identical from one host to the next.
Containers are an amazing way to run applications in a secure, sandboxed way.
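To make the pieces above concrete, here is a minimal sketch combining ports, volumes and environment variables in a single command; the host path and the PUID/PGID values are illustrative assumptions, not taken from the files in this commit:
```bash
# Illustrative values: adjust the host path and IDs for your own system.
docker run -d --name=letsencrypt \
  -e PUID=1000 -e PGID=1000 \
  -p 80:80 -p 443:443 \
  -v /opt/appdata/letsencrypt:/config \
  linuxserver/letsencrypt
```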

View file

@@ -42,4 +42,5 @@ if [ -f ~/.bash_aliases ]; then
fi
```
Once configured, log out and then log in again. Now you can type `dcpull` or `dcp up -d` to manage your entire fleet of containers at once. It's like magic.
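The hunk above shows only the `~/.bashrc` stanza that sources the aliases; a minimal sketch of what `dcp` and `dcpull` could look like in `~/.bash_aliases`, assuming a single compose file at a hypothetical path:
```bash
# Hypothetical definitions - the compose file path is an assumption.
alias dcp='docker-compose -f /opt/docker-compose.yml'
alias dcpull='docker-compose -f /opt/docker-compose.yml pull'
```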

View file

@@ -6,8 +6,8 @@
We have curated various base images which our main application images derive from. This is beneficial for two main reasons:
-- A common dependency base between multiple images, reducing the likelihood of variation between two or more applications that share the same dependencies.
-- Reduction in image footprint on your host machine by fully utilising Docker's image layering system. Multiple containers running locally that share the same base image will reuse that image and any of its ancestors.
+* A common dependency base between multiple images, reducing the likelihood of variation between two or more applications that share the same dependencies.
+* Reduction in image footprint on your host machine by fully utilising Docker's image layering system. Multiple containers running locally that share the same base image will reuse that image and any of its ancestors.
### The `/config` volume
@@ -28,3 +28,4 @@ docker create \
-p <host_port>:<app_port> \
linuxserver/<image_name>
```
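The hunk above shows only the tail of the `docker create` template; as a hedged illustration of the `/config` convention it belongs to, with the source's placeholders kept:
```bash
# Sketch: overlay a host directory onto the container's /config volume.
docker create \
  --name=<container_name> \
  -v /opt/appdata/<container_name>:/config \
  -p <host_port>:<app_port> \
  linuxserver/<image_name>
```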

View file

@@ -1,8 +1,12 @@
# Understanding PUID and PGID
+{% hint style="info" %}
+We are aware that recent versions of the Docker engine have introduced the `--user` flag. Our images are not yet compatible with this, so we recommend continuing usage of PUID and PGID.
+{% endhint %}
## Why use these?
-Docker runs all of its containers under the `root` user domain because it requires access to things like network configuration, process management, and your filesystem. This means that the processes running inside your containers also run as `root`. This kind of elevated access is not ideal for day-to-day use, and potentially gives applications access to things they shouldn't (although a strong understanding of volume and port mapping will help with this).
+Docker runs all of its containers under the `root` user domain because it requires access to things like network configuration, process management, and your filesystem. This means that the processes running inside your containers also run as `root`. This kind of elevated access is not ideal for day-to-day use, and potentially gives applications access to things they shouldn't \(although a strong understanding of volume and port mapping will help with this\).
Another issue is file management within the container's mapped volumes. If the process is running under `root`, all files and directories created during the container's lifespan will be owned by `root`, thus becoming inaccessible to you.
@@ -29,3 +33,4 @@ It is most likely that you will use the `id` of yourself, which can be obtained
```bash
id $user
```
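A short sketch of how the reported IDs are typically passed to a container; the `1000` values are an assumption \(a common first-user ID\), so substitute whatever `id` prints for you:
```bash
# id $user might print, e.g.: uid=1000(you) gid=1000(you) groups=1000(you)
# Feed those numbers to the container via the PUID and PGID variables:
docker create \
  -e PUID=1000 \
  -e PGID=1000 \
  linuxserver/<image_name>
```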

View file

@@ -18,7 +18,7 @@ docker stop <container_name>
Once the container has been stopped, remove it.
-> **Important**: Did you remember to persist the <code>/config</code> volume when you originally created the container? Bear in mind, you'll lose any configuration inside the container if this volume was not persisted. [Read up on why this is important](/docs/running-our-containers#the-code-classlanguage-textconfigcode-volume).
+> **Important**: Did you remember to persist the `/config` volume when you originally created the container? Bear in mind, you'll lose any configuration inside the container if this volume was not persisted. [Read up on why this is important](https://github.com/linuxserver/docker-documentation/tree/2fbbd392c7399b6dc743a8c7ea97e2e124870cff/docs/running-our-containers/README.md#the-code-classlanguage-textconfigcode-volume).
```bash
docker rm <container_name>
@@ -34,7 +34,7 @@ docker pull linuxserver/<image_name>
### Recreate the container
-Finally, you can recreate the container. This is often cited as the most arduous task as it requires you to remember all of the mappings you set beforehand. You can help mitigate this step by using Docker Compose instead - this topic has been [outlined in our documentation](/docs/started-with-compose).
+Finally, you can recreate the container. This is often cited as the most arduous task as it requires you to remember all of the mappings you set beforehand. You can help mitigate this step by using Docker Compose instead - this topic has been [outlined in our documentation](https://github.com/linuxserver/docker-documentation/tree/2fbbd392c7399b6dc743a8c7ea97e2e124870cff/docs/started-with-compose/README.md).
```bash
docker create \
@@ -45,3 +45,4 @@ docker create \
-p <host_port>:<app_port> \
linuxserver/<image_name>
```
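Where Docker Compose is in play, as the paragraph above suggests, the stop/remove/pull/recreate sequence collapses into two commands; a minimal sketch, assuming your services are defined in a local `docker-compose.yml`:
```bash
docker-compose pull     # fetch newer images for every defined service
docker-compose up -d    # recreate only the containers whose image changed
```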

View file

@@ -1,18 +1,18 @@
# Volumes
-In Docker terminology, a _volume_ is a storage device that allows you to persist the data used and generated by each of your running containers. While a container remains alive (in either an active or inactive state), the data inside its user-space remains intact. However, if you decide to recreate a container, all data within that container is lost. Volumes are an intrinsic aspect of container management, so it is useful to know how to create them.
+In Docker terminology, a _volume_ is a storage device that allows you to persist the data used and generated by each of your running containers. While a container remains alive \(in either an active or inactive state\), the data inside its user-space remains intact. However, if you decide to recreate a container, all data within that container is lost. Volumes are an intrinsic aspect of container management, so it is useful to know how to create them.
There are two ways to map persistent storage to your containers: container volumes and directory overlays. All of our images reference persistent data by means of directory overlays.
## Mapping a volume to your container
-Firstly, you must understand which directories from _within_ your container you wish to persist. All of our images come with side-by-side documentation on which internal directories are used by the application. As mentioned in the [Running our Containers](/docs/running-our-containers) documentation, the most common directory you will wish to persist is the `/config` directory.
+Firstly, you must understand which directories from _within_ your container you wish to persist. All of our images come with side-by-side documentation on which internal directories are used by the application. As mentioned in the [Running our Containers](https://github.com/linuxserver/docker-documentation/tree/2f6de18bf0244462248642628a930c9b4e1182f2/docs/running-our-containers/README.md) documentation, the most common directory you will wish to persist is the `/config` directory.
Before you create your container, first create a directory on the host machine that will act as the home for your persisted data. We recommend creating the directory `/opt/appdata`. Under this tree, you can create a single configuration directory for each of your containers.
When creating the container itself, now is the time to make use of the `-v` flag, which will tell Docker to overlay your host directory over the container's directory:
-```bash{2}
+```text
docker create --name my_container \
-v /opt/appdata/my_config:/config \
linuxserver/<an_image>
@@ -23,3 +23,4 @@ The above example shows how the usage of `-v` has mapped the host machine's `/op
> **Remember**: When dealing with mapping overlays, it always reads `host:container`
You can do this for as many directories as required by either you or the container itself. Our rule-of-thumb is to _always_ map the `/config` directory as this contains pertinent runtime configuration for the underlying application. For applications that require further data, such as media, our documentation will clearly indicate which internal directories need mapping.
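As a hedged example of mapping several directories at once - the media paths here are illustrative assumptions, not prescribed by the documentation:
```bash
# /config holds runtime configuration; additional mounts expose media.
docker create --name my_container \
  -v /opt/appdata/my_config:/config \
  -v /mnt/media/tv:/data/tvshows \
  linuxserver/<an_image>
```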