Kubernetes Pod and Container Concepts
External
- https://kubernetes.io/docs/concepts/workloads/pods/ (fully synced ✓)
Internal
Overview
A pod is the fundamental, atomic compute unit created and managed by Kubernetes. An application is deployed as one or more equivalent pods, and there are various strategies for partitioning an application into pods. A pod groups together one or more containers. There are several types of containers: application containers, init containers and ephemeral containers. Pods are deployed on worker nodes. A pod has a well-defined lifecycle with several phases, and the pod's containers can only be in one of a small number of well-defined states. Kubernetes learns what is happening inside a container through container probes.
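The container types are covered in their own sections below. As a quick, hedged illustration of how an init container and an application container appear together in a pod specification (all names and images here are assumptions made for the example, not taken from this article):

apiVersion: v1
kind: Pod
metadata:
  name: overview-example      # illustrative name
spec:
  initContainers:             # run to completion, in order, before the application containers start
  - name: init-setup
    image: busybox:1.36
    command: ["sh", "-c", "echo preparing && sleep 2"]
  containers:                 # application containers, started only after all init containers succeed
  - name: app
    image: nginx:1.25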
Pod
A pod is a group of one or more containers that Kubernetes deploys and manages as a single compute unit, together with the specification for how to run those containers. Kubernetes does not manage compute entities of smaller granularity, such as individual containers or processes. The containers of a pod are atomically deployed and managed as a group. A useful mental model for a pod is that of a logical host, where all of its containers share a context.
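To make this concrete, here is a minimal sketch of a single-container pod manifest; the pod name, container name and image are illustrative assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: example-pod          # illustrative name
spec:
  containers:
  - name: web                # the single application container of this pod
    image: nginx:1.25        # illustrative image; any container image works here
    ports:
    - containerPort: 80      # port the container listens on

Submitting such a manifest, for example with kubectl apply -f, asks Kubernetes to deploy and manage the pod as a whole; the container inside it is never created or managed on its own.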
Pod Operation Atomicity
Atomic Success or Failure
The deployment of a pod is an atomic operation: a pod is either entirely deployed, with all its containers co-located on the same node, or not deployed at all. A partially deployed pod will never service application requests.
All Containers of a Pod are Scheduled on the Same Node
A pod can be scheduled on one node and one node only, regardless of how many containers the pod has. All containers in the pod are always co-located and co-scheduled on the same node. Only when all of the pod's resources are ready does the pod become available and application traffic gets directed to it.
The containers in a pod share network resources and storage, in the form of filesystem volumes. From this perspective, a pod can be thought of as an application-specific logical host, with all its processes (containers) sharing the network stack and the storage available to the host. In a pre-container world, these processes would have run on the same physical or virtual host. In line with this analogy, a pod cannot span hosts. The pod's containers are relatively tightly coupled and run within the shared context provided by the pod. The shared context of a pod is a set of Linux namespaces and cgroups. Within a pod's context, individual containers may have further sub-isolations applied.
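A minimal sketch of the shared storage aspect, assuming two hypothetical containers that exchange data through an emptyDir volume mounted into both:

apiVersion: v1
kind: Pod
metadata:
  name: shared-storage-example    # illustrative name
spec:
  volumes:
  - name: shared-data             # pod-level volume, visible to every container that mounts it
    emptyDir: {}
  containers:
  - name: producer
    image: busybox:1.36
    command: ["sh", "-c", "while true; do date > /data/out.txt; sleep 5; done"]
    volumeMounts:
    - name: shared-data
      mountPath: /data            # the producer writes here ...
  - name: consumer
    image: busybox:1.36
    command: ["sh", "-c", "sleep 2; while true; do cat /data/out.txt; sleep 5; done"]
    volumeMounts:
    - name: shared-data
      mountPath: /data            # ... and the consumer reads the same files

The emptyDir volume exists only for the lifetime of the pod, which matches the pod's role as an ephemeral logical host.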
Single-Container Pods vs. Multi-Container Pods
The most common case is to declare a single container in a pod. There are advanced use cases - for example, service meshes - that require running multiple containers inside a pod. Containers share a pod when they execute tightly coupled workloads, provide complementary functionality and need to share resources. Configuring two or more containers in the same pod guarantees that the containers will run on the same node. Commonly accepted use cases for co-located containers are service meshes and logging.
Each container of a multi-container pod can be exposed externally on its own port. The containers share the pod's network namespace, and thus its TCP and UDP port space.
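Because of the shared network namespace, the containers of a pod see the same IP address and the same port space: two containers cannot listen on the same port, and they can reach each other over localhost. A minimal sketch, assuming a hypothetical web container and a sidecar:

apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar          # illustrative name
spec:
  containers:
  - name: web
    image: nginx:1.25
    ports:
    - containerPort: 80           # must be unique within the pod's shared port space
  - name: sidecar
    image: busybox:1.36
    # The sidecar reaches the web container over localhost because both
    # containers share the pod's network namespace.
    command: ["sh", "-c", "while true; do wget -q -O- http://localhost:80 > /dev/null; sleep 10; done"]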
Pod Lifecycle
Pod Phases
Pods and Nodes
Pods and Containers
Container
TODO:
Container Types
Application Container
Init Container
Ephemeral Container
Container States
Container Probes
Summary of the relationship between container probe results and the overall pod situation.
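For context, a minimal sketch of container probes in a pod specification, assuming an nginx container whose root path doubles as the health endpoint; a failing liveness probe makes the kubelet restart the container, while a failing readiness probe keeps application traffic away from the pod:

apiVersion: v1
kind: Pod
metadata:
  name: probe-example             # illustrative name
spec:
  containers:
  - name: app
    image: nginx:1.25             # illustrative image
    livenessProbe:                # the kubelet restarts the container if this probe keeps failing
      httpGet:
        path: /                   # assumed health endpoint
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
    readinessProbe:               # the pod receives traffic only while this probe succeeds
      httpGet:
        path: /
        port: 80
      periodSeconds: 5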