Kubernetes Concepts

External

Internal

TODO

Deplete Kubernetes Concepts TO DEPLETE.

Overview

Kubernetes is a container orchestrator. Understanding how it works means understanding a set of high-level concepts, briefly introduced here; more details on individual concepts are available on their respective pages.

All interactions with a Kubernetes cluster are performed by sending REST requests to an API server, which is responsible for managing and exposing the state of the cluster. The cluster consists of a set of nodes. Nodes are used to run pods - pods are scheduled to nodes and monitored closely by the cluster. A pod is the atomic unit of deployment in Kubernetes and contains one or more containers. Pods come and go - if a pod dies, it is not resurrected, but another pod may be scheduled as a replacement. As a consequence, the IP address of an individual pod cannot be relied on.

Applications are deployed on Kubernetes as sets of equivalent pods. To provide a stable access point to such a set, Kubernetes uses the concept of a service, which can be thought of as stable networking for a changing set of pods: a service's IP address and port can be relied on to remain stable for the life of the service. All live pods represented by a service at a given moment are known as the service's endpoints. There are several types of services: ClusterIP, NodePort and LoadBalancer. The association between a service and the pods it exposes is loose, established logically by the service's selector, which is a label-based mechanism: a pod "belongs" to a service if the service's selector matches the pod's labels.

A pod by itself has no built-in resilience: if it fails for any reason, it is gone. A higher level concept - the deployment - is used to manage a set of pods from a high availability perspective: the deployment ensures that a specific number of equivalent pods is always running, and if one or more pods fail, the deployment brings up replacement pods. The deployment relies on an intermediary concept - the replica set. Deployments are also used to implement rolling updates and rollbacks of pods.
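As a minimal sketch of how these concepts fit together, the manifests below declare a deployment that keeps three equivalent pods running and a ClusterIP service that selects those pods by label. The names and the container image (my-app, nginx) are illustrative assumptions, not taken from this article:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                  # illustrative name
spec:
  replicas: 3                   # the deployment keeps 3 equivalent pods running
  selector:
    matchLabels:
      app: my-app               # must match the pod template labels below
  template:
    metadata:
      labels:
        app: my-app             # labels attached to every pod created from this template
    spec:
      containers:
      - name: web
        image: nginx:1.25       # a single container per pod in this example
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-app                  # stable name and cluster IP for the pods
spec:
  type: ClusterIP               # other service types: NodePort, LoadBalancer
  selector:
    app: my-app                 # any pod whose labels match "belongs" to this service
  ports:
  - port: 80                    # stable service port
    targetPort: 80              # container port the traffic is forwarded to

Applying both manifests (for example with kubectl apply -f) results in a replica set that maintains three pods, while the service exposes them on a stable cluster IP and port, regardless of which individual pods happen to be running at any given moment.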