Docker Concepts
Internal
Overview
Docker is at the same time a packaging format, a set of tools with server and client components, and a development and operations workflow. Because it defines a workflow, Docker can be seen as a tool that reduces the complexity of communication between the development and the operations teams.
Docker architecture centers on atomic, throwaway containers. During the deployment of a new version of an application, the whole runtime environment of the old version is thrown away with it: dependencies, configuration, everything down to, but excluding, the O/S kernel. This means the new version of the application won't accidentally use artifacts left behind by the previous release, and ephemeral debugging changes will not survive. This approach also makes the application portable between servers, which act as places to dock standardized containers.
A Docker release artifact is a single file, whose format is standardized. It consists of a set of layered images.
The ideal Docker use cases are stateless applications, or applications that externalize their state in databases or caches: web frontends, backend APIs, and short-running tasks.
Docker Workflow
A Docker workflow represents the sequence of operations required to develop, test, and deploy an application to production using Docker.
The Docker workflow largely consists of the following sequence:
1. Developers build and test a Docker image and ship it to the registry.
2. Operations engineers provide configuration details and provision resources.
3. Developers trigger the deployment.
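A minimal sketch of this sequence using the docker command-line client; the image name, registry host, and test script are hypothetical placeholders:

 # build and test an image locally
 docker build -t registry.example.com/myapp:1.0 .
 docker run --rm registry.example.com/myapp:1.0 ./run-tests.sh
 # ship the image to the registry
 docker push registry.example.com/myapp:1.0
 # on a deployment host: pull the image and start a container from it
 docker pull registry.example.com/myapp:1.0
 docker run -d --name myapp registry.example.com/myapp:1.0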
Container
Container Image
A container image encapsulates all the dependencies and configuration of an application, and it can be deployed in any environment that has support for running containers. The same bundle can be assembled, tested, and shipped to production without any change.
Layered Image
Image Registry
- Docker Registry https://docs.docker.com/registry/
A Docker registry is a service that stores Docker images and metadata about those images. Examples include Docker Hub and the open-source Docker Registry linked above.
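As a sketch, a private registry can itself be run as a container (using the official registry:2 image) and images pushed to it; the port and image names below are only illustrative:

 # run a local registry on port 5000
 docker run -d -p 5000:5000 --name registry registry:2
 # re-tag a local image so its name points at that registry, then push it
 docker tag myapp:1.0 localhost:5000/myapp:1.0
 docker push localhost:5000/myapp:1.0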
Image Repository
A Docker repository is a collection of different Docker images that share the same name but have different tags.
Tag
A tag is an alphanumeric identifier that distinguishes the images within a repository.
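For example (the extra tag below is a hypothetical name), the ubuntu repository on Docker Hub holds many images distinguished only by their tags:

 # pull one specific image from the "ubuntu" repository
 docker pull ubuntu:16.04
 # give the same image an additional tag within the repository
 docker tag ubuntu:16.04 ubuntu:my-baseline
 # list local images belonging to the "ubuntu" repository, one line per tag
 docker images ubuntu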
Dockerfile
A Dockerfile defines how a container image is assembled at build time, and therefore how its containers will look when they run.
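A minimal, illustrative sketch: write a three-line Dockerfile, build an image from it, and run a throwaway container (the image name and base image are arbitrary choices):

 # write a minimal Dockerfile
 echo 'FROM ubuntu:16.04'                         >  Dockerfile
 echo 'RUN apt-get update'                        >> Dockerfile
 echo 'CMD ["echo", "hello from the container"]'  >> Dockerfile
 # build an image from it, then run and discard a container
 docker build -t myapp:1.0 .
 docker run --rm myapp:1.0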
Docker and Virtualization
Containers implement virtualization above the O/S kernel level.
In the case of O/S virtualization, a virtual machine contains a complete operating system and runs its own kernel, on top of the host operating system. The hypervisor that manages the VMs, and the VMs themselves, consume a share of the system's hardware resources, which are no longer available to the applications.
A container is just another process, with a lightweight wrapper around it, that interacts directly with the Linux kernel, and it can use the resources that would otherwise have gone to the hypervisor and the VM kernels. Both the host and the containers share the same kernel.
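One way to observe the shared kernel (a sketch; the alpine image is just a convenient, small example):

 # kernel release as seen by the host
 uname -r
 # kernel release as seen inside a container: the same value, since the kernel is shared
 docker run --rm alpine uname -r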
Cloud Platform
Docker is not a cloud platform. It only handles containers on pre-existing Docker hosts; it does not allow creating new hosts, object stores, block storage, or other resources that a cloud platform can provision dynamically.
Security
Dependencies
The Docker workflow allows all dependencies to be discovered during the development and test cycles.
The Docker Client
The Docker client runs directly on most major operating systems. The same Go executable acts as both client and server, depending on how it is invoked. The client uses the Remote API to communicate with the server.
The Docker Server
The Docker server is a process that runs as a daemon and manages the containers; the client tells the server what to do. The server relies on Linux containers and the underlying Linux kernel mechanisms (cgroups, namespaces, iptables, etc.), so it can only run on Linux hosts. The same Go executable acts as both client and server, depending on how it is invoked, but it will launch as a server only on supported Linux hosts. Each Docker host normally runs one Docker daemon, which can manage a number of containers.
The server can talk directly to the image registries when instructed by the client.
When configured to accept TCP connections, the server conventionally listens on port 2375 for unencrypted traffic and on port 2376 for TLS-encrypted traffic.
Client/Server Communication
The client and server communicate over network (TCP or Unix) sockets.
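A sketch of pointing the daemon and the client at a TCP socket; the host name and certificate paths are illustrative, and the unencrypted port should not be exposed outside a trusted network:

 # start the daemon listening on the local Unix socket and on the TLS-protected TCP port
 dockerd -H unix:///var/run/docker.sock \
         -H tcp://0.0.0.0:2376 \
         --tlsverify --tlscacert=ca.pem --tlscert=server-cert.pem --tlskey=server-key.pem
 # point a client at the remote daemon
 export DOCKER_HOST=tcp://docker-host.example.com:2376
 export DOCKER_TLS_VERIFY=1
 docker info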
Remote API
cgroups
Namespaces
Container Networking
A Docker container behaves like a host on a private network. Each container has its own virtual Ethernet interface and its own IP address. All containers managed by the same server are on a default virtual network together and can talk to each other directly. To reach the host and the outside world, traffic from the containers goes over an interface called docker0: the Docker server acts as a virtual bridge for outbound traffic. The Docker server also allows containers to "bind" to ports on the host, so outside traffic can reach them: that traffic passes through a proxy that is part of the Docker server before reaching the containers.
The default mode can be changed; for example, --net=host allows a container to use the host's own network device and address.
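For example (a sketch; the image and port numbers are arbitrary):

 # publish container port 80 on host port 8080, so outside traffic can reach the container
 docker run -d -p 8080:80 --name web nginx
 # show the container's address on the default bridge network
 docker inspect --format '{{ .NetworkSettings.IPAddress }}' web
 # bypass the bridge and use the host's own network device and address
 docker run -d --net=host nginx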
Docker Projects
Boot2Docker
It is deprecated.