Docker Concepts

From NovaOrdis Knowledge Base
=External=
* https://docs.docker.com/glossary/
=Internal=


* [[Docker#Subjects|Docker]]
* [[Kubernetes Concepts]]
* [[OpenShift Concepts|OpenShift Concepts]]


=Overview=


Docker is at the same time a [[#Image|container image]] packaging format, a set of tools with [[#The_Docker_Server|server]] and [[#The_Docker_Client|client]] components, and a [[#Docker_Workflow|development and operations workflow]]. Because it defines a workflow, Docker can be seen as a tool that reduces the complexity of communication between the development and the operations teams. The ideal Docker application use cases are stateless applications or applications that externalize their state in databases or caches: web frontends, backend APIs and short-running tasks.
 
Docker architecture centers on atomic and throwaway [[#Container|containers]]. During the deployment of a new version of an application, the whole runtime environment of the old version of the application is thrown away with it, including [[#Dependencies|dependencies]] and configuration, all the way to, but excluding, the O/S kernel. This means the new version of the application won't accidentally use artifacts left by the previous release, and the ephemeral debugging changes performed inside the container, if any, will not survive. This approach also makes the application portable between hosts, which act as places where to dock standardized containers. The only thing a container requires from a host is a kernel that supports containers. The Linux kernel (see "[[#Architecture|Architecture]]" below) has provided support for container technologies for years, but more recently the Docker project has developed a convenient management interface for containers on a host.
 
A Docker release artifact is a single file, whose format is standardized. It consists of a set of [[#Layer|layers]] assembled in an [[#Image|image]].
 
<span id='Cloud_Platform'></span>Docker is not a cloud platform. It only handles containers on pre-existing Docker hosts. It does not allow creating new hosts, object stores, block storage, or other resources that a cloud platform can provision dynamically.
 
=Architecture=
 
Containers require several kernel-level mechanisms to be available to work correctly:
* <span id='Namespaces'></span>'''Process isolation''' is provided by the kernel [[Linux_Namespaces#Overview|namespaces]] mechanism. By default, all containers have the [[Linux_Namespaces#PID_Namespace|PID]] and [[Linux_Namespaces#UTS_Namespace|UTS]] namespaces enabled.
* <span id='cgroups'></span>The capability to '''control a container's access to system resources''' is provided by the [[Linux_cgroups#Docker_and_cgroups|cgroups]] mechanism. For each container, one cgroup is created in each hierarchy. The cgroup is "lxc/<container-name>".
* '''Security''', which comes from the separation between the host and the container, and between individual containers, is enforced with [[Selinux|SELinux]].
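These kernel mechanisms can be observed directly on any Linux host. As a minimal sketch, assuming a Linux system with procfs mounted: every process exposes the namespaces it belongs to under <tt>/proc/<pid>/ns</tt>, and a containerized process simply receives fresh copies of some of them.

<syntaxhighlight lang='bash'>
# List the namespaces the current shell process belongs to; a containerized
# process gets its own pid, uts, etc. entries instead of sharing the host's
ls /proc/self/ns
</syntaxhighlight>

On the host, all processes typically point at the same namespace objects; inside a container, the pid and uts links point at different ones.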
 
=Container=
A '''Linux container''' is a lightweight mechanism for isolating running processes, so these processes interact only with designated resources. The primary aim of containers is to make programs easy to deploy in a way that does not cause them to break.
 
The process tree runs in a segregated environment provided by the operating system, with restricted access to system resources, and the container allows the administrator to monitor resource usage. Inbound or outbound external access is done via a [[Docker_Networking_Concepts#Virtual_Network_Adapter|virtual network adapter]]. From the application's perspective, it looks like the application is running alone inside its own O/S installation. An image encapsulates '''all''' files required to run an application - all the [[#Dependencies|dependencies]] of the application and its configuration - and it can be deployed to any environment that has support for running containers. The same bundle can be assembled, tested and shipped to production without any change. From this perspective, container images are a packaging technology.
 
Multiple applications can be run in containers on the same host, and each application won't have visibility into other applications' processes, files, network, etc. Typically, each container provides a single service, often called a [[Microservices#Overview|microservice]]. While it is technically possible to run multiple services within a container, this is generally not considered a best practice: the fact that a container provides a single function makes it theoretically easy to scale horizontally.
 
A '''Docker container''' is a Linux container that has been instantiated from a [[#Image|Docker image]]. Physically, the Docker container is a reference to a [[#Image|layered filesystem image]] and some configuration metadata ([[#Environment_Variables|environment variables]], for example). The detailed information that goes along with a container can be displayed with [[docker inspect]].
 
==<span id='Docker_and_Virtualization'></span>Containers and Virtualization==
 
Containers implement virtualization above the O/S kernel level.
 
In the case of O/S virtualization, a virtual machine contains a complete guest operating system and runs its own kernel, on top of the host operating system. The [[Virtualization_Concepts#Hypervisor|hypervisor]] that manages the VMs, and the VMs themselves, consume a share of the system's hardware resources, which are no longer available to the applications.
 
A container is just another process, with a lightweight wrapper around it, that interacts directly with the Linux kernel, and can utilize resources that would otherwise have gone to the hypervisor and the VM kernels. The container includes only the application and its [[#Dependencies|dependencies]]. It runs as an isolated process in user space, on the host's operating system. The host and all containers share the same kernel.
 
A virtual machine is long-lived in nature. Containers usually have shorter life spans.
 
The isolation among containers is, however, much more limited than the isolation among virtual machines. A virtual machine has, by default, hard limits on the hardware resources it can use. Containers, unless explicitly configured with limits on the resources they can use, compete for resources.
 
==Container Metadata==
{{Internal|Image and Container Metadata|Image and Container Metadata}}
===Container ID===
The long value can be obtained with:
<syntaxhighlight lang='bash'>
docker inspect --format="{{.Id}}" <''short-container-ID''>|<''container-name''>
</syntaxhighlight>
===<span id='Image_a_Container_is_Created_From'></span>The Name of the Image a Container is Created From===
The name of an image the container was instantiated from can be obtained by running [[docker ps]]. The image name is found in the "IMAGE" column.
 
==<span id='The_Container_Layer'></span>Difference Between Containers and Images - a Writable Layer==
Once instantiated, a container represents the '''runtime instance''' of the image it was instantiated from. The difference between the image and a container instantiated from it consists of '''an extra writable layer''', which is added on top of the topmost layer of the image. This layer is often called the "'''container layer'''". All activity inside the container that adds new data or modifies existing data - writing new files, modifying existing files or deleting files - will result in changes being stored in the writable layer. Any files the container does not change do not get copied in the writable layer, which means the writable layer is kept as small as possible. When an existing file is modified, the [[Docker_Storage_Concepts#Storage_Driver|storage driver]] performs a [[Docker_Storage_Concepts#Copy-on-Write_.28CoW.29_Strategy|copy-on-write]] operation.
 
The state of this writable layer can be inspected at runtime by logging into the container, or it can be exported with [[docker export]] and inspected offline. Because each container has its own writable container layer, which stores the changes that are particular to a specific container, multiple containers can share access to the same underlying image and yet maintain their own state. If multiple containers must share access to the same state, it should be done by storing the data in a [[Docker_Storage_Concepts#Data_Volume|volume]] mounted in all the containers. Volumes should also be used for write-heavy applications, which should not store data in the container.
 
When the container is stopped with [[docker stop]], the writable layer's state is preserved, so when the container is restarted with [[docker start]], the runtime container regains access to it. When the container is deleted with [[docker rm]], the writable layer is discarded so all the changes to the image are lost, but the underlying image remains unchanged.
 
<span id='Writable_Layer_Size'></span>The size of the writable layer is reported as "size" by [[Docker_ps#-s|docker ps -s]].
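The contents of the writable layer can also be listed with docker diff. A minimal sketch, assuming a Docker daemon is running and the <tt>alpine</tt> image is available (the container name is arbitrary):

<syntaxhighlight lang='bash'>
# Create a file in the container's writable layer
docker run --name test-container alpine touch /tmp/scratch-file
# List the changes relative to the image ('A' added, 'C' changed, 'D' deleted)
docker diff test-container
# Deleting the container discards the writable layer; the image is unchanged
docker rm test-container
</syntaxhighlight>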
===Container Root Filesystem Size===
{{External|https://docs.docker.com/engine/reference/commandline/dockerd/#storage-driver-options}}
At runtime, the container root file system is stored on a base device, which limits the size of the root file system. The default value is 10GB. With the devicemapper storage driver, the device size can be increased at daemon restart, which will allow all future images and containers (based on those new images) to use the new base device size:
<syntaxhighlight lang='text'>
dockerd --storage-opt dm.basesize=50G [...]
</syntaxhighlight>
 
==<span id='Interaction_with_a_Container'></span>stdin/stdout/stderr Interaction with a Container==
{{External|https://docs.docker.com/engine/reference/run/#detached-vs-foreground}}
A container can run in [[#Foreground_Mode|foreground]] mode or in [[#Detached_Mode|detached]] (background) mode. By default, a container starts in foreground mode. While in foreground or detached mode, the container may or may not be in [[#Interactive_Mode|interactive mode]].
===Foreground Mode===
{{External|https://docs.docker.com/engine/reference/run/#foreground}}
A container starts in foreground mode by default, if no argument is provided to the [[docker run]] command. In foreground mode, the Docker runtime attaches the container root process' stdout and stderr to the stdout and stderr of the shell that invokes the docker run command, so anything produced by the root process at stdout and stderr is immediately visible in the controlling terminal.
<syntaxhighlight lang='bash'>
docker run <image>
</syntaxhighlight>
The [[Docker_run#-a.2C_--attach|-a|--attach]] docker run option allows specifying which individual stream (stdin, stdout, stderr) to attach, so the default behavior is equivalent with:
<syntaxhighlight lang='bash'>
docker run -a stdout -a stderr <image>
</syntaxhighlight>
 
Foreground mode does not imply that the root process keeps its stdin open, nor that the stdout of the controlling terminal is attached to it. To send content into the root process via its stdin, the container must be started in [[#Interactive_Mode|interactive mode]].
 
Note that if the controlling terminal stdout is attached to the container root process' stdin with -a stdin, but the container is not started in [[#Interactive_Mode|interactive mode]], the content sent by the controlling terminal does not propagate to the root process, because its stdin is not open.
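The difference can be demonstrated by piping content into a container started with and without <tt>-i</tt> (a sketch, assuming the <tt>alpine</tt> image is available locally or can be pulled):

<syntaxhighlight lang='bash'>
# Without -i, the root process' stdin is closed immediately: cat reads nothing
echo hello | docker run --rm alpine cat
# With -i, stdin stays open and the piped content reaches the root process
echo hello | docker run --rm -i alpine cat
</syntaxhighlight>

The first command should produce no output; the second should print <tt>hello</tt>.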
 
===Interactive Mode===
A container is started in interactive mode if its root process keeps stdin open after startup.
 
A container can be started in interactive mode both in [[#Foreground_Mode|foreground]] and [[#Detached_Mode|detached]] mode. For a container started in foreground and interactive mode, the stdin of the root process will be immediately attached to the stdout of the current shell, so anything typed into the shell will be forwarded to the stdin of the process:
<syntaxhighlight lang='bash'>
docker run -i <image>
</syntaxhighlight>
Note that, unless [[Docker_run#-i.2C_--interactive|-i|--interactive]] is specified, a container is started by default in non-interactive mode, so the stdin of the container process is immediately closed. Also note that interactive mode does not necessarily imply that the root process is associated with a TTY device. The root process of the container will be associated with a TTY device only if the container was explicitly started with the [[Docker_run#-t.2C_--tty|-t|--tty]] option. For more details on TTY devices, see [[#Association_with_a_TTY_Device|Association with a TTY Device]].
 
===Association with a TTY Device===
By default, the root process of a container is not associated with any TTY device.
 
However, the process can be associated with a TTY device if the [[Docker_run#-t.2C_--tty|-t|--tty]] option is used at startup. This is necessary for shell interaction with the container, where interactive commands are sent into the container and the output of the container process is needed in the terminal.
 
If a TTY device is associated with the container and the container starts in [[#Foreground_Mode|foreground mode]], no new TTY device needs to be allocated, the container root process will be associated with the same TTY device as the controlling shell. If the container is started in [[#Detached_Mode|detached mode]], a new TTY device will be allocated and attached to the container root process.
 
The association with a TTY device is enabled by:
<syntaxhighlight lang='bash'>
docker run -t|--tty ...
</syntaxhighlight>
 
===<span id='Detached_Mode'></span>Detached (Background) Mode===
 
The detached mode is characterized by the fact that the stdin, stdout and stderr of the container's root process are disconnected from the process running the docker command that launches the container, so the detached container cannot be interacted with via stdin/stdout/stderr. Interaction with a detached container can only be done via the [[Docker_Networking_Concepts#Overview|network]] or [[Docker_Storage_Concepts#Data_Volume|volumes]].
 
To start a container in the detached mode, use the [[Docker_run#-d.2C_--detach|-d docker run option]]:
 
<syntaxhighlight lang='bash'>
> docker run -d|--detach <image>
dcb09d297c4aa0bf2144a1fa16c948bb68622321955d820d1c3f2543f6c9147d
>
</syntaxhighlight>
The container ID will be displayed by the shell, which will continue to interpret commands as usual. Containers started in detached mode exit when the root process exits.
 
It is possible to start the container in detached and [[#Interactive_Mode|interactive]] mode. In this case the container root process' stdin will stay open, and it could be later attached to with [[docker attach]] for as long as it is running. However, the command shell stdout will not be attached to the container root process' stdin, so commands typed into the current shell will continue to be interpreted as usual.
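A sketch of this scenario, assuming the <tt>alpine</tt> image is available (the container name is arbitrary):

<syntaxhighlight lang='bash'>
# -d -i: detached, but the root process' stdin is kept open, so the shell
# does not exit immediately
docker run -di --name detached-shell alpine /bin/sh
# Later, connect the current terminal to the container's root process
docker attach detached-shell
</syntaxhighlight>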
 
[[Docker_run#-a.2C_--attach|-a|--attach]] and [[Docker_run#-d.2C_--detach|-d|--detach]] are mutually exclusive.
 
==Container Lifecycle==
 
===Container Execution Sequence===
Once [[Docker_run#Overview|docker run]] is executed, the following sequence takes place:
* The Docker server checks whether the image to be run is available in the [[#Local_Image_Registry|local image registry]].
* If the image is not available in the local image registry, the Docker server contacts its configured remote registries and attempts to download the image from them. If the image is found, it is downloaded and cached locally in the [[#Local_Image_Registry|local image registry]].
* The Docker server creates a set of [[Linux Namespaces#Overview|namespaces]] and [[Linux cgroups#Overview|control groups]] for the container.
* The Docker server allocates and mounts a read-write layer. This layer will become the container's [[#Difference_Between_Containers_and_Images_-_a_Writable_Layer|writable layer]].
* The Docker server allocates the virtual network interface that will be used by the container to connect to the server's networking system.
* The networking system allocates an IP address for the virtual network interface.
* The process specified by the [[Dockerfile#ENTRYPOINT_and_CMD|ENTRYPOINT/CMD]] combination is executed.
* The Docker server connects and logs stdin/stdout/stderr depending on the run command configuration, specifically the presence of the [[Docker_run#-i.2C_--interactive|-i|--interactive]], [[Docker_run#-d.2C_--detach|-d|--detach]] and [[Docker_run#-t.2C_--tty|-t|--tty]] options.
* During its execution, the process may create child processes, which execute within the same container. However, the life of the container is controlled by the life of its root process, which has PID 1.
* The container will exit when the [[Linux General Concepts#The_Main_Thread|main thread]] of the root process terminates. For more details see [[#Container_Exit|Container Exit]].
 
===Container Exit===
The root process of a container runs as PID 1.
 
A container usually exits when the [[Linux_General_Concepts#Main_Thread|main thread]] of its root process terminates, irrespective of whether it was started in [[#Interactive_Mode|interactive]] or non-interactive mode, [[#Detached_Mode|detached]] or non-detached mode, or whether it has a [[#Association_with_a_TTY_Device|TTY device associated with it]]. When the main process terminates, the entire container is stopped, killing any child processes launched from the PID 1 process.
 
All containers on a Docker server will be forcibly terminated if the Docker server exits. They [[Docker_run#--restart|can be configured]] to [[#Restart_Policy|restart automatically]] on Docker server restart.
 
===Restart Policy===
 
The restart policy refers to the behavior of the Docker server when a specific container exits. It can be configured when the container is started with [[Docker_run#--restart|docker run]], or in [[Docker_Container_Configuration#RestartPolicy|hostconfig.json]]. Possible options: "no" (the default), "on-failure", "always" and "unless-stopped".
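As a sketch, the policy can be set at startup and then verified in the container metadata (assuming the <tt>alpine</tt> image; the container name is arbitrary):

<syntaxhighlight lang='bash'>
# Restart the container whenever it exits, unless it is explicitly stopped
docker run -d --restart=always --name restart-demo alpine sleep 300
# The configured policy is recorded in the container metadata
docker inspect --format '{{.HostConfig.RestartPolicy.Name}}' restart-demo
docker rm -f restart-demo
</syntaxhighlight>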
 
==<span id='Docker_Logging'></span>Logging==
 
{{External|https://docs.docker.com/engine/admin/logging/overview/#none}}
 
{{External|https://docs.docker.com/config/containers/logging/configure/}}
 
Container logging consists of the content sent to stdout and stderr by the process (or processes) running within the container.
 
By default (the <tt>json-file</tt> logging driver), the logging information gets translated into JSON records and written on the Docker server filesystem in <tt>/var/lib/docker/containers/''container-id''/''container-id''-json.log</tt>, and it can be retrieved with [[docker logs]]. If the logging driver is set to <tt>none</tt>, no logs are stored and [[docker logs]] produces no output.
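A sketch, assuming the <tt>alpine</tt> image is available; everything the root process writes to stdout ends up in the container's log:

<syntaxhighlight lang='bash'>
docker run --name logging-demo alpine echo 'hello from the container'
# Retrieve the captured stdout, even after the container has exited
docker logs logging-demo
docker rm logging-demo
</syntaxhighlight>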
 
==Configuration==
 
The container configuration can be accessed with [[docker inspect]] and it can be edited with [[docker update]]. It is also available on the docker server under /var/lib/docker/containers/<''container-id''>. More details about specific files and fields:
 
{{Internal|Docker Container Configuration|Docker Container Configuration}}
 
==Pause Container==
{{External|https://www.ianlewis.org/en/almighty-pause-container}}
 
A pause container is a container responsible for holding the network namespace, creating the shared network, assigning IP addresses, etc. for a set of other internal containers. It is how [[Kubernetes Concepts#Pod|pods]] are implemented in Kubernetes. Normally, if the last process in a network namespace dies, the namespace is destroyed. A pause container avoids that, while the internal containers can be killed and restarted.
==<span id='Environment_Variables'></span>Containers and Environment Variables==
Containerized applications must avoid maintaining configuration in filesystem files - if they do, it limits the reusability of the container. A common pattern used to handle application configuration is to move the configuration state into environment variables that can be passed to the application from the container. Docker supports environment variables natively: they are stored in the metadata that makes up the container configuration, and restarting the container ensures the same configuration is passed to the application each time.
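A sketch of the pattern (the variable names are hypothetical; assumes the <tt>alpine</tt> image):

<syntaxhighlight lang='bash'>
# Configuration is injected at startup instead of being baked into the image;
# the application reads it from its environment
docker run --rm -e DB_HOST=db.example.com -e DB_PORT=5432 alpine env
</syntaxhighlight>

The injected variables are recorded in the container metadata, so a restart re-injects the same values.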
 
==Container Best Practices==
{{Internal|Docker Container Best Practices|Container Best Practices}}
 
=<span id='Images'></span><span id='Container_Image'></span>Image=
 
Logically, a '''Docker image''' is a set of stacked layers, where each [[#Layer|layer]] represents the result of the execution of a [[Dockerfile#Instructions|Dockerfile instruction]]. Each layer is read-only and only contains differences from the layer before it; at runtime, a writable [[#Difference_Between_Containers_and_Images_-_a_Writable_Layer|container layer]] is added on top. The details related to how these layers interact with each other are handled by the [[Docker_Storage_Concepts#Storage_Driver|storage driver]]. Physically, a Docker image is a configuration object, or a '''manifest''', which specifies in JSON format, among other things, an ordered list of layer digests, which enables Docker to assemble a container's filesystem with reference to layer digests rather than parent images:
 
<syntaxhighlight lang='json'>
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
  "config": {
      "mediaType": "application/vnd.docker.container.image.v1+json",
      "size": 32501,
      "digest": "sha256:8...e"
  },
  "layers": [
      {
        "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
        "size": 39618920,
        "digest": "sha256:5...d"
      },
      {
        "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
        "size": 1747,
        "digest": "sha256:4...0"
      },
      [...]
      {
        "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
        "size": 2238,
        "digest": "sha256:8...3"
      }
  ]
}
</syntaxhighlight>
 
For differences between an image and a container, see [[#Difference_Between_Containers_and_Images_-_a_Writable_Layer|Difference Between Containers and Images]] above.
 
The image is produced by the [[docker build|build]] command, as the sole artifact of the build process. When an image needs to be rebuilt, every layer above the first changed layer needs to be rebuilt.
 
The space occupied on disk by a container can be estimated based on the output of the [[Docker_ps#-s|docker ps -s]] command, which provides [[Docker_ps#Size|size]] and [[Docker_ps#Virtual_Size|virtual size]] information. For accounting of the space occupied by container logging, which may be non-trivial, see [[#Logging|logging]].


Images are stored and accessed by the cryptographic checksum of their contents (the [[#Image_ID|image ID]]).

==Image Metadata==

Each image has an associated JSON structure which describes the image. The metadata includes the creation date, the author, the ID of the parent image, and execution/runtime configuration like its entry point, default arguments, CPU/memory shares, networking, and volumes. The JSON structure also references a cryptographic hash of each layer used by the image, and provides history information for those layers. This JSON structure is considered immutable, because changing it would change the computed image ID. Changing it means creating a new derived image, instead of changing the existing image.
 
{{Internal|Image and Container Metadata|Image and Container Metadata}}
 
===Image ID===
The image ID is a digest calculated by applying the SHA256 algorithm to the [[#Image_Metadata|image metadata]], which, among other things, contains an ordered list of layer digests. The content that goes into calculating the digest can be examined with [[docker inspect]]. The first 12 digits of the image ID are displayed as "IMAGE ID" by the [[docker images]] command.
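The general principle - the ID is a digest of the content, so any change to the metadata yields a different ID - can be illustrated with <tt>sha256sum</tt> (this hashes an arbitrary JSON snippet, not a real image manifest):

<syntaxhighlight lang='bash'>
printf '{"layers":["sha256:5...d"]}' | sha256sum
# Changing a single character in the metadata produces an entirely new digest
printf '{"layers":["sha256:5...e"]}' | sha256sum
</syntaxhighlight>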
 
===Image Name===
The image name can be used as an argument to the [[docker pull#Image/Repository_Name|docker pull]] command.
===<span id='Labels'></span>Label===
Labels represent metadata in the form of key/value pairs, and they can be specified with the Dockerfile [[Dockerfile#LABEL|LABEL]] instruction. Labels can be applied to [[#Container|containers]] and [[#Container_Image|images]] and they are useful in identifying and searching Docker images and containers. Labels applied to an image can be retrieved with the [[docker inspect]] command.
 
==Base Image==
 
{{External|https://docs.docker.com/engine/userguide/eng-image/baseimages/}}
 
When a container is assembled from a [[Dockerfile]], the initial image upon which layers are added is called the ''base image''. A base image has no parents. The base image is specified by the Dockerfile [[Dockerfile#FROM|FROM]] instruction. Once a base image has been used to create a new image with [[docker build]], it becomes the [[#Parent_Image|parent image]] of the newly created image.
 
This is an article advising on base images to use: https://www.brianchristner.io/docker-image-base-os-size-comparison/. Base images used so far:
* [[centos Base Image]]
* [[rhel Base Image]]
* [[busybox Base Image]]
* [[alpine Base Image]]
 
==Parent Image==
 
An image's ''parent image'' is the image designated in the [[Dockerfile#FROM|FROM]] directive in the image's Dockerfile. All subsequent commands are applied to this parent image. An image whose Dockerfile has no FROM directive has no parent image, and is called a [[#Base_Image|base image]]. The parent image ID can be obtained from the [[Image_and_Container_Metadata#Parent|image metadata]] with [[docker inspect]].
 
==Searching for Images==
 
The Docker client command [[docker search#Overview|docker search]] can be used to search for images in [[#Docker_Hub|Docker Hub]] or other repositories.
 
==Layer==
 
A ''layer'' of a [[#Image|Docker image]] represents the result of the execution of a [[Dockerfile#Instructions|Dockerfile instruction]]. Each layer is identified by a unique long hexadecimal number called a <span id='hash'></span>''hash'', usually shortened to 12 digits. Each layer is stored in its own local directory inside Docker's [[#Local_Image_Registry|local image registry]] (however, the directory names do not correspond to the layer IDs). The layers are [[#Docker_Revision_Control|version controlled]].
 
==Tag==
 
A ''tag'' is an alphanumeric identifier of the [[#Image|images]] within a repository, and it is generally used to identify a particular release of the image. It is a form of [[#Docker_Revision_Control|Docker revision control]]. Tags are needed because applications evolve over time, and a single image name can refer to many different versions of the same image. An image is uniquely identified by its [[#hash|hash]] and possibly by one or several tags. An image may be tagged in the local registry when the image is first built, using the [[Docker_build#-t.2C_--tag|-t option of the "docker build"]] command, or with the [[docker tag]] command. An image may have multiple tags. For example, the [[#The_.22latest.22_Tag|"latest"]] tag may be associated with a specific version tag.
 
A tag name must be valid ASCII and may contain lowercase and uppercase letters, digits, underscores, periods and dashes. A tag name may not start with a period or a dash and may contain a maximum of 128 characters.
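These rules can be approximated with a regular expression (a sketch assembled from the constraints above, not Docker's own validation code):

<syntaxhighlight lang='bash'>
valid_tag() {
  # First character: letter, digit or underscore; up to 127 more characters
  # drawn from letters, digits, underscores, periods and dashes
  printf '%s' "$1" | grep -Eq '^[A-Za-z0-9_][A-Za-z0-9_.-]{0,127}$'
}
valid_tag 'v1.2.3'  && echo 'valid'
valid_tag '.hidden' || echo 'invalid: starts with a period'
valid_tag '-dev'    || echo 'invalid: starts with a dash'
</syntaxhighlight>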
 
See: {{Internal|docker tag|docker tag}}
 
===The "latest" Tag===
 
If the [[docker pull]] command is used without any explicitly specified tag, "latest" is implied. However, the "latest" tag must exist in the repository on the registry being accessed, for the command to work.
 
“latest” simply means “the last build/tag that ran without a specific tag/version specified”. For more on this, see [https://medium.com/@mccode/the-misunderstood-docker-tag-latest-af3babfd6375 The misunderstood Docker tag: latest].
 
===Docker Tag, Containers and Kubernetes Pods===
{{Internal|Docker Tag, Containers and Kubernetes Pods|Docker Tag, Containers and Kubernetes Pods}}
 
==URL==
 
A repository URL. The most generic format is:
 
[''registry''][:''port''][/''namespace''/]<''repository''>[:''tag'']
 
If not specified, the default registry is "docker.io", the namespace section is "/library/" and the default tag is "latest". More details about [[Docker_Concepts#The_.22latest.22_Tag|"latest"]].
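The defaulting rules can be sketched as a small shell function (an illustration only; Docker's real normalization also handles registry ports and multi-level namespaces, which this sketch does not):

<syntaxhighlight lang='bash'>
expand_ref() {
  ref="$1"
  # Split off the tag, defaulting to "latest"
  case "$ref" in
    *:*) tag="${ref##*:}"; name="${ref%:*}" ;;
    *)   tag='latest';     name="$ref" ;;
  esac
  # A bare repository name gets the default registry and namespace
  case "$name" in
    */*) echo "$name:$tag" ;;
    *)   echo "docker.io/library/$name:$tag" ;;
  esac
}
expand_ref ubuntu         # docker.io/library/ubuntu:latest
expand_ref ubuntu:18.04   # docker.io/library/ubuntu:18.04
</syntaxhighlight>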
 
==Union Filesystem==
Docker uses a ''union filesystem'' to combine all [[#Layer|layers]] within an image into a single coherent filesystem.
 
==Dependencies==
The [[#Docker_Workflow|Docker workflow]] allows all dependencies to be discovered during the development and test cycles.
 
==Dangling Image==
 
An image is said to be "dangling" if it is not associated with a repository name in a registry, usually the local registry:
 
<syntaxhighlight lang='text'>
REPOSITORY   TAG      IMAGE ID       CREATED         SIZE
<none>       <none>   0c0359fd3c0d   8 seconds ago   1.14MB
</syntaxhighlight>
 
==Image Building==
===Builder Pattern===
{{External|https://blog.alexellis.io/mutli-stage-docker-builds/}}
The practice of maintaining one Dockerfile for development and a corresponding Dockerfile for production. The development Dockerfile contains the tools and libraries needed to build the application. The production Dockerfile is a slimmed-down version of the development Dockerfile, which only contains the application artifacts and exactly what is needed to run them. However, maintaining two related Dockerfiles is not ideal. An alternative is to use a [[#Multi-Stage_Build|multi-stage build]].
 
===Build Cache===
{{Internal|Docker Build Cache|Docker Build Cache}}
 
===Multi-Stage Build===
 
{{External|https://docs.docker.com/engine/userguide/eng-image/multistage-build/}}
 
{{External|https://blog.alexellis.io/mutli-stage-docker-builds/}}
 
A more efficient replacement for the [[Docker Concepts#Builder_Pattern|builder pattern]].
 
A multi-stage build has two advantages: it avoids placing tools and unneeded files in the final image, and generates smaller images.
 
The general syntax involves adding FROM additional times within the Dockerfile and naming the build stages. Whichever FROM statement comes last designates the final base image. To copy artifacts and outputs from intermediate stages, use COPY --from=<stage_name>:
 
<syntaxhighlight lang='Docker'>
FROM something AS my_build
 
# The final image is based on alpine:latest and contains only what is copied in
FROM alpine:latest
COPY --from=my_build ...
</syntaxhighlight>
 
Also see: {{Internal|Docker_build#Multi-Stage_Build|docker build - Multi-Stage Build}}
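A more complete sketch, for a hypothetical Go application (the stage name, paths and base images are illustrative, not prescribed):

<syntaxhighlight lang='Docker'>
# Build stage: contains the full toolchain, discarded after the build
FROM golang:1.22 AS builder
WORKDIR /src
COPY . .
RUN go build -o /out/app .

# Final stage: only the compiled artifact is carried over
FROM alpine:latest
COPY --from=builder /out/app /usr/local/bin/app
ENTRYPOINT ["/usr/local/bin/app"]
</syntaxhighlight>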
 
===Best Practices for Creating Images===
{{Internal|Docker_Container_Best_Practices#Best_Practices_for_Creating_Images|Docker Container Best Practices &#124; Best Practices for Creating Images}}
==Multi-Architecture Container Image==
{{Internal|Multi-Architecture Container Images#Overview|Multi-Architecture Container Images}}
==<tt>scratch</tt>==
An empty Docker image with no operating system files.
 
=Context=
{{External|https://docs.docker.com/engine/context/working-with-contexts}}
 
A Docker context contains all information required to manage resources on a Docker daemon. This information includes: name and description of the context, the Docker daemon endpoint configuration and TLS info. The <code>docker context</code> command can be used to manage the Docker contexts.
 
<syntaxhighlight lang='bash'>
docker context list
</syntaxhighlight>
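Contexts are typically used to point the client at a remote daemon; a sketch, where the context name and SSH endpoint are illustrative:

<syntaxhighlight lang='bash'>
# Create a context whose daemon endpoint is reached over SSH (illustrative host).
docker context create remote-host --docker "host=ssh://user@remote.example.com"
# Make it the target of subsequent docker commands.
docker context use remote-host
docker context list
</syntaxhighlight>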
 
=Dockerfile=
 
A Dockerfile defines how a container should look at build time, and it contains all the steps that are required to create an [[#Image|layered image]]. Each command in the Dockerfile generates a new [[#Layer|layer]] in the image. The Dockerfile is an argument of the [[docker build|build]] command; the argument is implicit if the file is present in the directory the command is run from. For more details, see: {{Internal|Dockerfile|Dockerfile}}
 
==Docker Image DSL==
 
Docker defines its own Domain Specific Language (DSL) for creating Docker images.
 
{{External| https://docs.docker.com/engine/reference/builder/}}
 
==.dockerignore==
 
{{Internal|.dockerignore|.dockerignore}}


==Build Context==


{{Internal|Docker_build#The_Build_Context|Build Context}}


==<span id='Exec_Form_and_Shell_Form'></span>Entrypoint==
Both [[Dockerfile#ENTRYPOINT|ENTRYPOINT]] and [[Dockerfile#CMD|CMD]] directives support two different forms: the exec form and the shell form. When specifying the shell form, the binary is executed with an invocation of the shell using:
<syntaxhighlight lang='bash'>
/bin/sh -c
</syntaxhighlight>
For more details see: {{Internal|Dockerfile#CMD_vs._ENTRYPOINT|Dockerfile Reference - CMD vs. ENTRYPOINT}}


=Image Repository=
{{External|https://docs.docker.com/docker-hub/repos/#creating-repositories}}


A ''Docker image repository'' is a collection of [[#Image|Docker images]] with the same name but different [[#Tag|tags]].


==Repository Name==


The repository name can be used as argument of the [[docker pull#Image/Repository_Name|docker pull]] command.


=Image Registry=


{{External|Docker Registry https://docs.docker.com/registry/}}
An ''image registry'' is a service for storing and retrieving Docker container [[#Image|images]] and contains a collection of one or more [[#Image_Repository|image repositories]]. Most image registries are hosted services. Clients interact with the registry using a ''registry API''. The default Docker registry, if Docker was not customized in this respect, is [[#Docker_Hub|Docker Hub]], and it shows as "https://index.docker.io/v1/" in [[docker info]]. The registries the Docker instance is configured with can be listed with [[docker info]]:


docker info
...
Registry: https&#58;//registry.access.redhat.com/v1/
Insecure Registries:
  172.30.0.0/16
  127.0.0.0/8
Registries: registry.access.redhat.com (secure), registry.access.redhat.com (secure), docker.io (secure)
 
Other registries:
* https://quay.io
* https://cloud.google.com/container-registry/
* [[OpenShift_Concepts#Image_Registries|OpenShift image registry]]


The docker server can be configured to look up images in arbitrary registries, block registries or allow insecure registries by using the [[Docker_Server_Configuration#--add-registry|--add-registry]], [[Docker_Server_Configuration#--block-registry|--block-registry]] and [[Docker_Server_Configuration#--insecure-registry|--insecure-registry]] options in the [[#Docker_Daemon|docker daemon]] configuration file. For more details see [[Docker_Server_Configuration#Overview|Docker Server Configuration]].
 
==Registry Authentication==
 
Protected registries require authentication prior to interacting with them. To authenticate, execute:
 
  [[Docker login|docker login]]
 
==Registry Path==
 
A registry path is similar to a URL, but does not contain a protocol specifier (https&#58;//). A registry path can be used as an image name prefix when attempting [[Docker_pull#Pull_from_a_Different_Registry|to pull from a different registry]] than [[#Docker_Hub|Docker Hub]]. Example:
 
registry.access.redhat.com/rhscl/postgresql-95-rhel7
 
==<span id='Local_Image_Registry'></span>Local Image Registry (Docker Registry)==
 
Docker can be used to run a local image registry. The implementation of the registry is provided by Distribution (formerly known as Registry).
 
{{Internal|Distribution_Registry#Overview|Distribution Registry}}
 
==Docker Hub==
 
Docker Hub is a cloud service that offers [[#Image_Registry|image registry]] functionality. It is useful for sharing applications and automating workflows:
 
{{External|https://hub.docker.com}}
 
The [[docker search]] command searches Docker Hub (by default) for images whose name match the command argument.
 
More Docker Hub operations: {{Internal|Docker Hub Operations|Docker Hub Operations}}
 
Nova Ordis Images:
 
{{Internal|Nova_Ordis_Docker_Hub_Images|Nova Ordis Docker Hub Images}}
 
==Image Operations==
 
{{Internal|Docker Client Operations#Image_Operations|Image Operations}}
=Stack=
 
{{External|https://docs.docker.com/v17.12/docker-cloud/apps/stack-yaml-reference/}}
{{External|https://docs.docker.com/v17.12/docker-cloud/apps/stacks/}}
{{External|https://docs.docker.com/v17.12/docker-cloud/apps/stack-yaml-reference/}}
 
A stack is a collection of [[#Docker_Stack_Service|services]] that make up an application in a specific environment. Stacks are specified in stack files, YAML files similar to the [[Docker_Compose#docker-compose.yaml|docker-compose.yaml]] file. Stacks are a convenient way to automatically deploy multiple services that are linked to each other, without needing to define each one separately.
 
The CLI to manage stacks is [[docker stack]].
 
==Docker Stack Service==
 
{{External|https://docs.docker.com/engine/swarm/how-swarm-mode-works/services/}}
 
A service is the definition of how an application container should be deployed and handled by an orchestrator: the services specified by a stack can be managed by an orchestrator (Kubernetes, Swarm). Docker Desktop comes with a built-in Kubernetes orchestrator.
 
At the most basic level, a service defines which container image should be run by the orchestrator and which commands to run in the container. For orchestration purposes, the service defines the “desired state”, meaning how many containers to run as tasks, and constraints for deploying the containers. Frequently a service is a microservice within the context of some larger application. Examples of services might include an HTTP server, a database, or any other type of executable program that you wish to run in a distributed environment.
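A minimal stack file sketch, assuming a hypothetical web service (the service name, image and ports are illustrative); the format mirrors the docker-compose.yaml file:

```yaml
version: "3.7"
services:
  web:
    image: nginx:alpine        # illustrative image
    ports:
      - "8080:80"
    deploy:
      replicas: 3              # desired state: run three tasks
      restart_policy:
        condition: on-failure
```

Such a file would be deployed with docker stack deploy, which hands the desired state to the orchestrator.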


=Other Docker Concepts=
==<span id='Networking'></span><span id='Docker_Server_Networking'></span><span id='Container_Networking'></span>Docker Networking Concepts==
{{Internal|Docker Networking Concepts#Overview|Docker Networking Concepts}}
==<span id='Backends'></span><span id='Image_Storage'></span><span id='devicemapper-storage-driver'></span><span id='devicemapper_Storage_Driver'></span><span id='overlayfs-storage-driver'></span><span id='overlayfs_Storage_Driver'></span><span id='AUFS'></span><span id='BTRFS'></span><span id='Copy-on-Write_.28CoW.29_Strategy'></span><span id='Loopback_Storage'></span><span id='Non-Image_State_Storage'></span><span id='Mounted_Volumes'></span><span id='Docker_Volume'></span><span id='Data_Volume'></span><span id='Bind_Mount'></span><span id='Bind_Mounts_vs._Data_Volumes'></span><span id='Native_Host_Path_Permissions'></span><span id='UID.2FGID_Mapping'></span><span id='Docker_Storage'></span>Docker Storage Concepts==


Image storage, storage driver, storage backend, devicemapper, overlayfs, AUFS, BTRFS, Copy-on-Write (Cow) strategy, loopback storage, non-image state storage, data volume, bind mount, UID/GID mapping.


{{Internal|Docker_Storage_Concepts#Overview|Docker Storage Concepts}}


==Docker Security Concepts==


{{Internal|Docker Security|Docker Security Concepts}}


==<span id='Controlling_CPU'></span><span id='CPU_Share_Constraint'></span><span id='CPU_Quota_Constraint'></span><span id='Resource_Management'></span>Docker Resource Management Concepts==
{{Internal|Docker Resource Management Concepts|Docker Resource Management Concepts}}


==Docker Revision Control==
Docker provides two forms of revision control:
* Tracking the filesystem [[#Layered_Image|layers]] the images are made up of.
* [[#Tag|Tagging]] for build containers.


==Container Downward API==
{{Internal|Docker Container Downward API|Container Downward API}}


=<span id='Docker_Engine'></span><span id='Docker_Runtime'></span><span id='Docker_Components'></span>Docker Runtime (Docker Engine)=
{{External|https://docs.docker.com/engine/}}


The containerization technology for building and containerizing applications is also known as Docker Engine. Docker Engine is a portable runtime and packaging tool. It is a client-server application, with a [[#The_Docker_Server|server]] (a long-running daemon process), APIs, and a [[#The_Docker_Client|command line client]], <code>docker</code>.


==The Docker Client==
The ''Docker client'' is an executable used to control most of the [[#Docker_Workflow|Docker workflow]] and communicate with remote [[#The_Docker_Server|servers]]. The Docker client runs directly on most major operating systems. The same [[Go]] executable acts as both client and server, depending on how it is invoked. The client uses the [[#Remote_API|Remote API]] to communicate with the [[#The_Docker_Server|server]].
{{Internal|Docker Client Operations|Client Operations}}
===Docker Client Configuration===
====<span id='config.json'></span><tt>~/.docker/config.json</tt>====
<syntaxhighlight lang='json'>
{
  "auths": {
    "https://index.docker.io/v1/": {
      "username": "tiger",
      "password": "pass113",
      "email": "tiger@acme.com",
      "auth": "dGlnZXI6cGFzczExMw=="
    }
  }
}
</syntaxhighlight>
Replaces <code>[[#dockercfg|dockercfg]]</code>.
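The <code>auth</code> value is simply the base64 encoding of <code>username:password</code>, which can be verified with the illustrative credentials shown above:

```shell
# base64("tiger:pass113") reproduces the "auth" field in the example above.
printf 'tiger:pass113' | base64
# → dGlnZXI6cGFzczExMw==
```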
====<span id='dockercfg'></span><tt>~/.docker/dockercfg</tt>====
This is a legacy configuration file, replaced by <code>[[#config.json|~/.docker/config.json]]</code>.


==<span id='Docker_Daemon'></span>The Docker Server==
The ''Docker server'' (also referred to as the ''Docker daemon'') is a process that runs as a daemon and manages the containers; the [[#The_Docker_Client|client]] tells the server what to do. The server uses Linux containers and the underlying Linux kernel mechanisms ([[Linux cgroups|cgroups]], [[Linux Namespaces|namespaces]], [[iptables]], etc.), so it can only run on Linux servers. The same [[Go]] executable acts as both client and server, depending on how it is invoked, and it will launch as server only on supported Linux hosts. Each Docker host will normally have one Docker daemon that can manage a number of containers.


The server can talk directly to the [[#Image_Registry|image registries]] when instructed by the client.


The server listens on port 2375 for non-encrypted traffic and on port 2376 for encrypted traffic, and on the <tt>unix:///var/run/docker.sock</tt> [[Linux Unix Socket#Overview|Unix socket]].


The Docker server maintains the state of running (and stopped) containers under /var/lib/docker/containers/<''container-id''>. The logs are in /var/lib/docker/containers/<''container-id''>/<''container-id''>-json.log. More about logging is available here: [[#Logging|Logging]].


The daemon requires root privileges, so only trusted users should be allowed to control it.


{{Internal|Docker Server Operations|Server Operations}}


==Client/Server Communication==
The client and server communicate over [[Linux_TCP/IP_Socket|TCP]] or [[Linux Unix Socket|Unix]] sockets, via a REST API.


The server is always executed by "root". However, in most cases we want a different, non-privileged user to be able to run the client executable ("docker") and connect to the server. When the client and the server are collocated on the same host, the communication takes place over the  [[Linux Unix Socket|Unix socket]], and by default, the Unix socket the server is listening on (<tt>unix:///var/run/docker.sock</tt>) has restricted permissions:


ls -al /var/run/docker.sock
srw-rw---- 1 root docker 0 Apr 19 11:00 /var/run/docker.sock


Thus, a completely random user won't be able to use the client binary to connect to the socket. To enable access, we must make the user part of the socket's owner group, which is by default "docker".
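A common way to grant a non-privileged user access is to add it to the "docker" group (the user name is illustrative; group membership takes effect at the next login):

```shell
# Add user "alice" to the "docker" group so she can reach the daemon socket.
sudo usermod -aG docker alice
# Verify the membership (requires re-login to take effect).
id alice
```

Note that membership in the "docker" group is effectively root-equivalent, which is why the daemon-securing references below matter.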


For details on how to secure the daemon access see: {{Internal|Secure Docker Daemon|Secure Docker Daemon}}


For details on how to enable TCP access, {{Internal|Configure_Docker_Server_to_Listen_on_TCP|Configure Docker Server to Listen on TCP}}


=Docker Workflow=
A Docker workflow represents the sequence of operations required to develop, test and deploy an application in production using Docker. The Docker workflow consists of the following sequence:
# Developers build and test a [[#Container_Image|Docker image]] and ship it to the [[#Image_Registry|registry]].
# Operations engineers provide configuration details and provision resources.
# Developers trigger the deployment.


=Docker Projects=
==<span id='Docker_on_Mac'></span>Docker Desktop on Mac==
{{Internal|Docker Desktop|Docker Desktop}}
==Docker Compose==
{{Internal|Docker Compose|Docker Compose}}
==Docker Swarm==
{{Internal|Docker Swarm|Docker Swarm}}
==Docker Machine==
More details: https://github.com/docker/machine
==Boot2Docker==
Deprecated.


=Build Stage=
See: {{Internal|Dockerfile#FROM|<tt>FROM ... AS ...</tt>}}
=Miscellanea=
* <span id='Remote_API'></span>'''Remote API'''. https://docs.docker.com/engine/api/
* <span id='Atomic_Host'></span> An '''atomic host''' is a small, finely tuned operating system image like https://coreos.com or http://www.projectatomic.io, that supports container hosting and atomic OS upgrades.

Latest revision as of 20:44, 23 September 2024


A Docker release artifact is a single file, whose format is standardized. It consists of a set of layers assembled in an image.

Docker is not a cloud platform. It only handles containers on pre-existing Docker hosts. It does not allow creating new hosts, object stores, block storage, or other resources that can be provisioned dynamically by a cloud platform.

Architecture

Containers require several kernel-level mechanisms to be available to work correctly:

  • Process isolation is provided by the kernel namespaces mechanism. By default, all containers have the PID and UTS namespaces enabled.
  • Capability to control a container's access to the system resources is provided by the cgroups mechanism. For each container, one cgroup is created in each hierarchy. The cgroup is "lxc/<container-name>".
  • Security that comes from separation between the host and the container, and between individual containers is enforced with SELinux.
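The kernel exposes a process's namespaces under /proc/<pid>/ns/; a container's processes simply get their own entries there, distinct from the host's. On any Linux host:

```shell
# Every process has namespace links; containerized processes point at
# different namespace IDs than processes on the host.
readlink /proc/self/ns/pid
readlink /proc/self/ns/uts
```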

Container

A Linux container is a lightweight mechanism for isolating running processes, so these processes interact only with designated resources. The primary aim of containers is to make programs easy to deploy in a way that does not cause them to break.

The process tree runs in a segregated environment provided by the operating system, with restricted access to these resources, and the container allows the administrator to monitor resource usage. Inbound or outbound external access is done via a virtual network adapter. From an application's perspective, it looks like the application is running alone inside its own O/S installation. An image encapsulates all files required to run an application - all the dependencies of an application and its configuration - and it can be deployed on any environment that has support for running containers. The same bundle can be assembled, tested and shipped to production without any change. From this perspective, container images are a packaging technology.

Multiple applications can be run in containers on the same host, and each application won't have visibility into other applications' processes, files, network, etc. Typically, each container provides a single service, often called a microservice. While it is technically possible to run multiple services within a container, this is generally not considered a best practice: the fact that a container provides a single function makes it theoretically easy to scale horizontally.

A Docker container is a Linux container that has been instantiated from a Docker image. Physically, the Docker container is a reference to a layered filesystem image and some configuration metadata (environment variables, for example). The detailed information that goes along with a container can be displayed with docker inspect.

Containers and Virtualization

Containers implement virtualization above the O/S kernel level.

In case of O/S virtualization, a virtual machine contains a complete guest operating system and runs its own kernel, on top of the host operating system. The hypervisor that manages the VMs, and the VMs themselves, use a percentage of the system's hardware resources, which are no longer available to the applications.

A container is just another process, with a lightweight wrapper around it, that interacts directly with the Linux kernel, and can utilize resources that would otherwise have gone to the hypervisor and the VM kernels. The container includes only the application and its dependencies. It runs as an isolated process in user space, on the host's operating system. The host and all containers share the same kernel.

A virtual machine is long lived in nature. Containers have usually shorter life spans.

The isolation among containers is, however, much more limited than the isolation among virtual machines. A virtual machine has default hard limits on the hardware resources it can use. Containers, unless explicit limits are placed on the resources they can use, compete for resources.

Container Metadata

Image and Container Metadata

Container ID

The long value can be obtained with:

docker inspect --format="{{.Id}}" <''short-container-ID''>|<''container-name''>

The Name of the Image a Container is Created From

The name of an image the container was instantiated from can be obtained by running docker ps. The image name is found in the "IMAGE" column.

Difference Between Containers and Images - a Writable Layer

Once instantiated, a container represents the runtime instance of the image it was instantiated from. The difference between the image and a container instantiated from it consists of an extra writable layer, which is added on top of the topmost layer of the image. This layer is often called the "container layer". All activity inside the container that adds new data or modifies existing data - writing new files, modifying existing files or deleting files - will result in changes being stored in the writable layer. Any files the container does not change do not get copied in the writable layer, which means the writable layer is kept as small as possible. When an existing file is modified, the storage driver performs a copy-on-write operation.

The state of this writable layer can be inspected at runtime by logging into the container, or it can be exported with docker export and inspected offline. Because each container has its own writable container layer, which stores the changes that are particular to that specific container, multiple containers can share access to the same underlying image and yet maintain their own state. If multiple containers must share access to the same state, it should be done by storing the data in a volume mounted in all the containers. Volumes should also be used for write-heavy applications, which should not store data in the container.

When the container is stopped with docker stop, the writable layer's state is preserved, so when the container is restarted with docker start, the runtime container regains access to it. When the container is deleted with docker rm, the writable layer is discarded so all the changes to the image are lost, but the underlying image remains unchanged.

The size of the writable layer is reported as "size" by docker ps -s.

Container Root Filesystem Size

https://docs.docker.com/engine/reference/commandline/dockerd/#storage-driver-options

At runtime, the container root file system is stored on a base device, which limits the size of the root file system. The default value is 10GB. The device size can be increased at daemon restart which will allow all future images and containers (based on those new images) to be of the new base device size:

dockerd --storage-opt dm.basesize=50G [...]

stdin/stdout/stderr Interaction with a Container

https://docs.docker.com/engine/reference/run/#detached-vs-foreground

A container can run in foreground mode or in detached (background) mode. By default, a container starts in foreground mode. While in foreground or detached mode, the container may or may not be in interactive mode.

Foreground Mode

https://docs.docker.com/engine/reference/run/#foreground

A container starts in foreground mode by default, if no argument is provided to the docker run command. In foreground mode, the Docker runtime attaches the container root process' stdout and stderr to the stdout and stderr of the shell that invokes the docker run command, so anything produced by the root process at stdout and stderr is immediately visible in the controlling terminal.

docker run <image>

The -a|--attach docker run option allows specifying which individual stream (stdin, stdout, stderr) to attach, so the default behavior is equivalent to:

docker run -a stdout -a stderr <image>

Foreground mode does not imply that the root process keeps its stdin open, nor that the stdout of the controlling terminal is attached to it. To send content into the root process via its stdin, the container must be started in interactive mode.

Note that if the container root process' stdin is attached with -a stdin, but the container is not started in interactive mode, the content sent by the controlling terminal does not propagate to the root process, because its stdin is not open.

Interactive Mode

A container is started in interactive mode if its root process keeps stdin open after startup.

A container can be started in interactive mode both in foreground and in detached mode. For a container started in foreground and interactive mode, the stdin of the root process will be immediately attached to the current shell, so anything typed into the shell will be forwarded to the stdin of the process:

docker run -i <image>

Note that, unless -i|--interactive is specified, a container is started by default in non-interactive mode, so the stdin of the container process is immediately closed. Also note that interactive mode does not necessarily imply that the root process is associated with a TTY device. The root process of the container will be associated with a TTY device if the container was explicitly started with this option. For more details on TTY devices, see Association with a TTY Device.

Association with a TTY Device

By default, the root process of a container is not associated with any TTY device.

However, the process can be associated with a TTY device if the -t|--tty option is used at startup. This is necessary for shell interaction with the container, where interactive commands are sent into the container and the output of the container process is needed in the terminal.

If a TTY device is associated with the container and the container starts in foreground mode, no new TTY device needs to be allocated: the container root process will be associated with the same TTY device as the controlling shell. If the container is started in detached mode, a new TTY device will be allocated and attached to the container root process.

The association with a TTY device is enabled by:

docker run -t|--tty ...

Detached (Background) Mode

The detached mode is characterized by the fact that the stdin, stdout and stderr of the container's root process are disconnected from the process running the docker command that launches the container, so the detached container cannot be interacted with via stdin/stdout/stderr. Interaction with a detached container can only be done via network or volumes.

To start a container in the detached mode, use the -d docker run option:

> docker run -d|--detach <image>
dcb09d297c4aa0bf2144a1fa16c948bb68622321955d820d1c3f2543f6c9147d
>

The container ID will be displayed by the shell, which will continue to interpret commands as usual. Containers started in detached mode exit when the root process exits.

It is possible to start the container in detached and interactive mode. In this case the container root process' stdin will stay open, and it could later be attached to with docker attach for as long as it is running. However, the current shell will not be attached to the container root process' stdin, so commands typed into the shell will continue to be interpreted as usual.

-a|--attach and -d|--detach are mutually exclusive.

Container Lifecycle

Container Execution Sequence

Once docker run is executed, the following sequence takes place:

  • The Docker server checks whether the image to be run is available in the local image registry.
  • If the image is not available in the local image registry, the Docker server contacts its configured remote registries and attempts to download the image from them. If the image is found, it is downloaded and cached locally in the local image registry.
  • The Docker server creates a set of namespaces and control groups for the container.
  • The Docker server allocates and mounts a read-write layer. This layer will become the container's writable layer.
  • The Docker server allocates the virtual network interface that will be used by the container to connect to the server's networking system.
  • The networking system allocates an IP address for the virtual network interface.
  • The process specified by the ENTRYPOINT/CMD combination is executed.
  • The Docker server connects and logs stdin/stdout/stderr depending on the run command configuration, specifically the presence of the -i|--interactive, -d|--detach and -t|--tty options.
  • During its execution, the process may create child processes, which execute within the same container. However, the life of the container is controlled by the life of its root process, which has PID 1.
  • The container will exit when the main thread of the root process terminates. For more details see Container Exit.

Container Exit

The root process of a container runs as PID 1.

A container usually exits when the main thread of its root process terminates, irrespective of whether it was started in interactive or non-interactive mode, detached or non-detached mode, or whether it has a TTY device associated with it. If the main process terminates, the entire container is stopped, killing any child processes launched from the PID 1 process.

All containers on a Docker server will be forcibly terminated if the Docker server exits. They can be configured to restart automatically on Docker server restart.

Restart Policy

The restart policy refers to the behavior of the Docker server when a specific container exits. It can be configured when the container is started with docker run, or in hostconfig.json. Possible options include "no" and "always".
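The policy is set per container at startup with the --restart option (the image name is illustrative):

```shell
# Restart the container automatically whenever it exits or the daemon restarts.
docker run -d --restart=always nginx
# "no" (the default) leaves exited containers stopped.
docker run -d --restart=no nginx
```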

Logging

https://docs.docker.com/engine/admin/logging/overview/#none
https://docs.docker.com/config/containers/logging/configure/

Container logging consists of the content sent to stdout and stderr by the process (or processes) running within the container.

By default, the logging information gets translated into JSON records and written on the Docker server file system in /var/lib/docker/containers/container-id/container-id-json.log, and it can be accessed with docker logs.
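The logging driver and its options can be overridden per container; a sketch using the json-file driver's options (the image name is illustrative):

```shell
# Cap this container's json-file logs at three 10 MB files.
docker run -d --log-driver=json-file \
  --log-opt max-size=10m --log-opt max-file=3 nginx
```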

Configuration

The container configuration can be accessed with docker inspect and it can be edited with docker update. It is also available on the docker server under /var/lib/docker/containers/<container-id>. More details about specific files and fields:

Docker Container Configuration

Pause Container

https://www.ianlewis.org/en/almighty-pause-container

A pause container is a container responsible for holding the network namespace, creating the shared network, assigning IP addresses, etc. for a set of other internal containers. This is how pods are implemented in Kubernetes. Normally, if the last process in a network namespace dies, the namespace is destroyed. A pause container avoids that, while the internal containers can be killed and restarted.

Containers and Environment Variables

Containerized applications should avoid maintaining configuration in filesystem files - doing so limits the reusability of the container. A common pattern for handling application configuration is to move configuration state into environment variables that can be passed to the application from the container. Docker supports environment variables natively: they are stored in the metadata that makes up the container configuration, and restarting the container ensures the same configuration is passed to the application each time.
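The pattern can be sketched as follows (DB_HOST is a hypothetical configuration variable; the docker run flag shown in the comment is how the container would supply it):

```shell
# The container would pass the value with e.g.:
#   docker run -e DB_HOST=db.example.com ...
# Inside the container, the application reads it from the environment,
# falling back to a default when it is not set:
DB_HOST="${DB_HOST:-localhost}"
echo "connecting to $DB_HOST"
```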

Container Best Practices

Container Best Practices

Image

Logically, a Docker image is a set of stacked layers, where each layer represents the result of the execution of a Dockerfile instruction. Each layer, except for the last one, the container layer, is read-only, and it only contains the differences from the layer before it. The details of how these layers interact with each other are handled by the storage driver. Physically, a Docker image is a configuration object, or manifest, which specifies, in JSON format, among other things, an ordered list of layer digests. This enables Docker to assemble a container's filesystem by reference to layer digests rather than parent images:

{
   "schemaVersion": 2,
   "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
   "config": {
      "mediaType": "application/vnd.docker.container.image.v1+json",
      "size": 32501,
      "digest": "sha256:8...e"
   },
   "layers": [
      {
         "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
         "size": 39618920,
         "digest": "sha256:5...d"
      },
      {
         "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
         "size": 1747,
         "digest": "sha256:4...0"
      },
      [...]
      {
         "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
         "size": 2238,
         "digest": "sha256:8...3"
      }
   ]
}

For differences between an image and a container, see Difference Between Containers and Images above.

The image is produced by the build command, as the sole artifact of the build process. When an image is rebuilt, every layer after the first changed layer must be rebuilt.
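This makes instruction ordering matter for build times. A common sketch (assuming a hypothetical Node.js application) copies the dependency manifest before the rest of the source, so that source-only changes leave the dependency installation layer cached:

```shell
# Sketch (assumed Node.js app): the dependency manifest is copied before
# the source tree, so editing source code invalidates only the final
# layers and the `npm install` layer stays cached.
cat > Dockerfile.cache <<'EOF'
FROM node:20-alpine
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm install
COPY . .
CMD ["node", "server.js"]
EOF
grep -n 'COPY' Dockerfile.cache
```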

The space occupied on disk by a container can be estimated based on the output of the docker ps -s command, which provides size and virtual size information. For accounting of the space occupied by container logging, which may be non-trivial, see logging.

Images are stored and accessed by the cryptographic checksum of their contents (the image ID).

Image Metadata

Each image has an associated JSON structure which describes the image. The metadata includes creation date, author, the ID of the parent image, execution/runtime configuration like its entry point, default arguments, CPU/memory shares, networking, and volumes. The JSON structure also references a cryptographic hash of each layer used by the image, and provides history information for those layers. This JSON structure is considered to be immutable, because changing it would change the computed ImageID. Changing it means creating a new derived image, instead of changing the existing image.

Image and Container Metadata

Image ID

The image ID is a digest calculated by applying the SHA256 algorithm to the image metadata, which, among other things, contains an ordered list of layer digests. The content that goes into calculating the digest can be examined with docker inspect. The first 12 hex digits of the image ID are displayed as "IMAGE ID" by the docker images command.
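The derivation can be sketched outside Docker with standard tools (the configuration JSON below is a toy stand-in, not a real image configuration):

```shell
# Sketch: the image ID is the SHA256 digest of the image's configuration
# JSON; the first 12 hex digits are what `docker images` displays.
config='{"architecture":"amd64","rootfs":{"type":"layers"}}'   # toy config
full_id="$(printf '%s' "$config" | sha256sum | awk '{print $1}')"
short_id="$(printf '%s' "$full_id" | cut -c1-12)"
echo "IMAGE ID: $short_id"
```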

Image Name

The image name can be used as argument of the docker pull command.

Label

Labels represent metadata in the form of key/value pairs, and they can be specified with the Dockerfile LABEL command. Labels can be applied to containers and images and they are useful in identifying and searching Docker images and containers. Labels applied to an image can be retrieved with docker inspect command.

Base Image

https://docs.docker.com/engine/userguide/eng-image/baseimages/

When a container image is assembled from a Dockerfile, the initial image upon which layers are added is called the base image. A base image has no parent. The base image is specified by the Dockerfile FROM instruction. Once a base image has been used to create a new image with docker build, it becomes the parent image of the newly created image.

This article advises on base images to use: https://www.brianchristner.io/docker-image-base-os-size-comparison/.

Parent Image

An image's parent image is the image designated in the FROM directive of the image's Dockerfile. All subsequent commands are applied to this parent image. An image whose Dockerfile designates no parent image is a base image. The parent image ID can be obtained from the image metadata with docker inspect.

Searching for Images

The Docker client command docker search can be used to search for images in Docker Hub or other repositories.

Layer

A layer of a Docker image represents the result of the execution of a Dockerfile instruction. Each layer is identified by a unique long hexadecimal number named a hash, usually shortened to 12 digits. Each layer is stored in its own local directory inside Docker's local image registry (however, the directory names do not correspond to the layer IDs). The layers are version controlled.

Tag

A tag is an alphanumeric identifier of the images within a repository, and it is generally used to identify a particular release of the image. It is a form of Docker revision control. Tags are needed because applications develop over time, and a single image name can refer to many different versions of the same image. An image is uniquely identified by its hash, and possibly by one or several tags. An image may be tagged in the local registry when it is first built, using the -t option of the docker build command, or later with the docker tag command. An image may have multiple tags; for example, the "latest" tag may be associated with a specific version tag.

A tag name must be valid ASCII and may contain lowercase and uppercase letters, digits, underscores, periods and dashes. A tag name may not start with a period or a dash and may contain a maximum of 128 characters.
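These rules can be checked mechanically; the sketch below encodes them as an ERE pattern (first character may not be a period or dash; 128 characters total):

```shell
# Sketch: validate a tag name against the stated rules (ASCII letters,
# digits, underscores, periods, dashes; must not start with a period or
# dash; at most 128 characters).
is_valid_tag() {
  printf '%s' "$1" | grep -Eq '^[A-Za-z0-9_][A-Za-z0-9_.-]{0,127}$'
}
is_valid_tag "v1.2.3" && echo valid
is_valid_tag ".hidden" || echo invalid
```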

See:

docker tag

The "latest" Tag

If the docker pull command is used without any explicitly specified tag, "latest" is implied. However, the "latest" tag must exist in the repository on the registry being accessed, for the command to work.

"latest" simply means "the last build/tag that ran without a specific tag/version specified". For more on this, see The misunderstood Docker tag: latest.

Docker Tag, Containers and Kubernetes Pods

Docker Tag, Containers and Kubernetes Pods

URL

A repository URL. The most generic format is:

[registry][:port][/namespace/]<repository>[:tag]

If not specified, the default registry is "docker.io", the namespace section is "/library/" and the default tag is "latest". More details about "latest".
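A simplified sketch of how these defaults expand a reference (it ignores digest references and special cases such as a bare "localhost" registry):

```shell
# Simplified sketch: expand an image reference to its fully qualified
# form using the documented defaults (registry docker.io, namespace
# library, tag latest).
normalize_ref() {
  ref="$1"
  tag="latest"
  # split off the tag, if the part after the last ':' contains no '/'
  last="${ref##*:}"
  if [ "$last" != "$ref" ] && [ "${last#*/}" = "$last" ]; then
    tag="$last"
    ref="${ref%:*}"
  fi
  # no '/': bare repository name, add the default namespace
  case "$ref" in
    */*) : ;;
    *)   ref="library/$ref" ;;
  esac
  # first component is a registry host only if it contains '.' or ':'
  host="${ref%%/*}"
  case "$host" in
    *.*|*:*) : ;;
    *)       ref="docker.io/$ref" ;;
  esac
  printf '%s:%s\n' "$ref" "$tag"
}
normalize_ref ubuntu          # docker.io/library/ubuntu:latest
normalize_ref ubuntu:22.04    # docker.io/library/ubuntu:22.04
```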

Union Filesystem

Docker uses a union filesystem to combine all layers within an image into a single coherent filesystem.

Dependencies

The Docker workflow allows all dependencies to be discovered during the development and test cycles.

Dangling Image

An image is said to be "dangling" if it is not associated with a repository name in a registry, usually the local registry; it is listed with "<none>" as repository and tag:

REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
<none>                               <none>              0c0359fd3c0d        8 seconds ago       1.14MB

Image Building

Builder Pattern

https://blog.alexellis.io/mutli-stage-docker-builds/

The practice of maintaining one Dockerfile for development and a corresponding Dockerfile for production. The development Dockerfile contains the tools and libraries needed to build the application. The production Dockerfile is a slimmed-down version of the development Dockerfile, which only contains the application artifacts and exactly what is needed to run them. However, maintaining two related Dockerfiles is not ideal. An alternative is to use a multi-stage build.

Build Cache

Docker Build Cache

Multi-Stage Build

https://docs.docker.com/engine/userguide/eng-image/multistage-build/
https://blog.alexellis.io/mutli-stage-docker-builds/

A more efficient replacement for the builder pattern.

A multi-stage build has two advantages: it avoids placing tools and unneeded files in the final image, and generates smaller images.

The general syntax involves adding FROM additional times within the Dockerfile, naming the build stages. The last FROM statement introduces the final stage, whose result is the final image. To copy artifacts and outputs from intermediate stages, use COPY --from=<stage_name>:

FROM something AS my_build

# This results in a single layer image
FROM alpine:latest
COPY --from=my_build  ...

Also see:

docker build - Multi-Stage Build

Best Practices for Creating Images

Docker Container Best Practices | Best Practices for Creating Images

Multi-Architecture Container Image

Multi-Architecture Container Images

scratch

An empty Docker image with no operating system files. It is typically used as the base image for containers that run a single statically-linked binary.

Context

https://docs.docker.com/engine/context/working-with-contexts

A Docker context contains all the information required to manage resources on a Docker daemon. This information includes the name and description of the context, the Docker daemon endpoint configuration and TLS info. The docker context command can be used to manage Docker contexts.

docker context list

Dockerfile

A Dockerfile defines how a container image should be built, and it contains all the steps required to create a layered image. Each command in the Dockerfile generates a new layer in the image. The Dockerfile is an argument of the build command - possibly an implicit one, if it is present in the directory the command is run from. For more details, see:

Dockerfile

Docker Image DSL

Docker defines its own Domain Specific Language (DSL) for creating Docker images.

https://docs.docker.com/engine/reference/builder/

.dockerignore

.dockerignore

Build Context

Build Context

Entrypoint

Both the ENTRYPOINT and CMD directives support two different forms: the exec form and the shell form. When the shell form is specified, the binary is executed with an invocation of the shell using:

/bin/sh -c

For more details see:

Dockerfile Reference - CMD vs. ENTRYPOINT
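The two forms can be sketched side by side (the binary path is an illustrative assumption):

```shell
cat > Dockerfile.entrypoint <<'EOF'
FROM alpine:3.19
# exec form: the binary runs directly as PID 1
ENTRYPOINT ["/usr/local/bin/myapp", "--serve"]
# shell form (alternative): wrapped in `/bin/sh -c`, so the shell,
# not myapp, is PID 1 and receives signals:
# ENTRYPOINT /usr/local/bin/myapp --serve
EOF
grep 'ENTRYPOINT' Dockerfile.entrypoint
```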

Image Repository

https://docs.docker.com/docker-hub/repos/#creating-repositories

A Docker image repository is a collection of different Docker images with the same name but different tags.

Repository Name

The repository name can be used as argument of the docker pull command.

Image Registry

An image registry is a service for storing and retrieving Docker container images; it contains a collection of one or more image repositories. Most image registries are hosted services. Clients interact with a registry using a registry API. The default Docker registry, if Docker was not customized in this respect, is Docker Hub, which shows as "https://index.docker.io/v1/" in docker info. The registries the Docker instance is configured with can be listed with docker info:

docker info
...
Registry: https://registry.access.redhat.com/v1/
Insecure Registries:
 172.30.0.0/16
 127.0.0.0/8
Registries: registry.access.redhat.com (secure), registry.access.redhat.com (secure), docker.io (secure)

The Docker server can be configured to look up images in arbitrary registries, block registries or allow insecure registries by using the --add-registry, --block-registry and --insecure-registry options in the Docker daemon configuration file. For more details see Docker Server Configuration.

Registry Authentication

Protected registries require authentication prior to interacting with them. To authenticate, execute:

 docker login

Registry Path

A registry path is similar to a URL, but does not contain a protocol specifier (https://). A registry path can be used as an image name prefix when attempting to pull from a different registry than Docker Hub. Example:

registry.access.redhat.com/rhscl/postgresql-95-rhel7

Local Image Registry (Docker Registry)

Docker can be used to run a local image registry. The implementation of the registry is provided by Distribution (formerly known as Registry).

Distribution Registry

Docker Hub

Docker Hub is a cloud service that offers image registry functionality. It is useful for sharing applications and automating workflows:

https://hub.docker.com

The docker search command searches Docker Hub (by default) for images whose names match the command argument.

More Docker Hub operations:

Docker Hub Operations

Nova Ordis Images:

Nova Ordis Docker Hub Images

Image Operations

Image Operations

Stack

https://docs.docker.com/v17.12/docker-cloud/apps/stack-yaml-reference/
https://docs.docker.com/v17.12/docker-cloud/apps/stacks/
https://docs.docker.com/v17.12/docker-cloud/apps/stack-yaml-reference/

A stack is a collection of services that make up an application in a specific environment. Stacks are specified in stack files - YAML files similar to docker-compose.yml files. Stacks are a convenient way to automatically deploy multiple services that are linked to each other, without needing to define each one separately.

The CLI to manage stacks is docker stack.
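A minimal stack file might look as follows (the service and image names are illustrative assumptions):

```shell
# It would be deployed with e.g.:
#   docker stack deploy -c stack.yml myapp
cat > stack.yml <<'EOF'
version: "3.8"
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
  api:
    image: example/api:1.0
    deploy:
      replicas: 3
EOF
grep 'image:' stack.yml
```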

Docker Stack Service

https://docs.docker.com/engine/swarm/how-swarm-mode-works/services/

A service is the definition of how an application container should be deployed and handled by an orchestrator: the services specified by a stack can be managed by an orchestrator (Kubernetes, Swarm). Docker Desktop comes with a built-in Kubernetes orchestrator.

At the most basic level, a service defines which container image should be run by the orchestrator and which commands to run in the container. For orchestration purposes, the service defines the "desired state": how many containers to run as tasks, and constraints for deploying the containers. Frequently, a service is a microservice within the context of some larger application. Examples of services include an HTTP server, a database, or any other type of executable program that runs in a distributed environment.

Other Docker Concepts

Docker Networking Concepts

Docker Networking Concepts

Docker Storage Concepts

Image storage, storage driver, storage backend, devicemapper, overlayfs, AUFS, BTRFS, Copy-on-Write (CoW) strategy, loopback storage, non-image state storage, data volume, bind mount, UID/GID mapping.

Docker Storage Concepts

Docker Security Concepts

Docker Security Concepts

Docker Resource Management Concepts

Docker Resource Management Concepts

Docker Revision Control

Docker provides two forms of revision control:

  • Tracking the filesystem layers the images are made up of.
  • Tagging of built images.

Container Downward API

Container Downward API

Docker Runtime (Docker Engine)

https://docs.docker.com/engine/

The containerization technology for building and containerizing applications is also known as Docker Engine. Docker Engine is a portable runtime and packaging tool. It is a client-server application consisting of a server with a long-running daemon process, APIs, and a command-line client, docker.

The Docker Client

The Docker client is an executable used to control most of the Docker workflow and communicate with remote servers. The Docker client runs directly on most major operating systems. The same Go executable acts as both client and server, depending on how it is invoked. The client uses the Remote API to communicate with the server.

Client Operations

Docker Client Configuration

~/.docker/config.json

{
  "auths": {
    "https://index.docker.io/v1/": {
      "username": "tiger",
      "password": "pass113",
      "email": "tiger@acme.com",
      "auth": "dGlnZXI6cGFzczExMw=="
    }
  }
}

Replaces dockercfg.

~/.docker/dockercfg

This is a legacy configuration file, replaced by ~/.docker/config.json.

The Docker Server

The Docker server (also referred to as the Docker daemon) is a process that runs as a daemon and manages the containers; the client tells the server what to do. The server uses Linux containers and the underlying Linux kernel mechanisms (cgroups, namespaces, iptables, etc.), so it can only run on Linux servers. The same Go executable acts as both client and server, depending on how it is invoked, and it will launch as a server only on supported Linux hosts. Each Docker host normally has one Docker daemon that can manage a number of containers.

The server can talk directly to the image registries when instructed by the client.

When remote access is enabled, the server listens on port 2375 for non-encrypted traffic and on port 2376 for encrypted traffic. By default it listens on the unix:///var/run/docker.sock Unix socket.

The Docker server maintains running (and stopped) container state under /var/lib/docker/containers/<container-id>. The logs are in /var/lib/docker/containers/<container-id>/<container-id>-json.log. More about logging is available here: Logging.

The daemon requires root privileges, so only trusted users should be allowed to control it.

Server Operations

Client/Server Communication

The client and server communicate via a REST API, over TCP or Unix sockets.

The server is always executed as "root". However, in most cases we want a different, non-privileged user to be able to run the client executable ("docker") and connect to the server. When the client and the server are collocated on the same host, the communication takes place over the Unix socket and, by default, the Unix socket the server is listening on (unix:///var/run/docker.sock) has restricted permissions:

ls -al /var/run/docker.sock
srw-rw---- 1 root docker 0 Apr 19 11:00 /var/run/docker.sock

Thus, an arbitrary user won't be able to use the client binary to connect to the socket. To enable access, the user must be made part of the socket's owner group, which is by default "docker".
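A quick way to check whether the current user can reach the socket (a sketch; it assumes the default socket path):

```shell
# Prints "yes" if the default Docker Unix socket exists and is writable
# by the current user, "no" otherwise.
can_use_docker_socket() {
  if [ -S /var/run/docker.sock ] && [ -w /var/run/docker.sock ]; then
    echo yes
  else
    echo no
  fi
}
can_use_docker_socket
# To grant access, add the user to the socket's owner group
# ("docker" by default), then log out and back in:
#   sudo usermod -aG docker "$USER"
```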

For details on how to secure the daemon access see:

Secure Docker Daemon

For details on how to enable TCP access, see:

Configure Docker Server to Listen on TCP

Docker Workflow

A Docker workflow represents the sequence of operations required to develop, test and deploy an application to production using Docker. The Docker workflow consists of the following sequence:

  1. Developers build and test a Docker image and ship it to the registry.
  2. Operations engineers provide configuration details and provision resources.
  3. Developers trigger the deployment.

Docker Projects

Docker Desktop on Mac

Docker Desktop

Docker Compose

Docker Compose

Docker Swarm

Docker Swarm

Docker Machine

More details: https://github.com/docker/machine

Boot2Docker

Deprecated.

Build Stage

See:

FROM ... AS ...

Miscellanea