Kubernetes Control Plane and Data Plane Concepts


Overview

When you deploy Kubernetes, you get a cluster.

[Image: Kubernetes Cluster.png]

Cluster

A Kubernetes cluster consists of a set of nodes, which all run containerized applications. Of those, a small number run applications that manage the cluster; they are referred to as master nodes and are collectively known as the control plane. The rest of the nodes, a potentially larger number, but at least one, are the worker nodes. The worker nodes run the cluster's workload and are collectively known as the data plane. Kubernetes clusters with zero worker nodes are possible, and have been seen, but they are quite useless.
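For illustration, the split between the two planes is visible when listing the cluster's nodes. The node names below are hypothetical output from a cluster with three masters and two workers:

 $ kubectl get nodes
 NAME       STATUS   ROLES           AGE   VERSION
 master-0   Ready    control-plane   10d   v1.22.1
 master-1   Ready    control-plane   10d   v1.22.1
 master-2   Ready    control-plane   10d   v1.22.1
 worker-0   Ready    <none>          10d   v1.22.1
 worker-1   Ready    <none>          10d   v1.22.1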

Node

https://kubernetes.io/docs/concepts/architecture/nodes/

A node is a Linux host that can run as a VM, a bare-metal device or an instance in a private or public cloud. A node can be a master or worker. In most cases, the term "node" means worker node. The controller manager includes a node controller. Pods are scheduled on nodes. Each Kubernetes node runs a standard set of node components: the kubelet, the kube-proxy and the container runtime.
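The components and their versions can be read from the node's "System Info" section; a sketch, with a hypothetical node name and illustrative version strings:

 $ kubectl describe node worker-0 | grep -A 2 'Container Runtime Version'
 Container Runtime Version:  containerd://1.4.9
 Kubelet Version:            v1.22.1
 Kube-Proxy Version:         v1.22.1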

kubelet

Each node runs an agent called kubelet. For more details see:

kubelet

kube-proxy

https://kubernetes.io/docs/reference/command-line-tools-reference/kube-proxy/

The kube-proxy is a process running on each node of the cluster. It is responsible for establishing the virtual network across the nodes. kube-proxy makes sure each node gets its own unique IP address and implements local iptables or IPVS rules to handle routing and traffic on the pod network.
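In iptables mode this is directly observable on a node: kube-proxy maintains, among others, a KUBE-SERVICES chain in the nat table. A sketch, assuming iptables mode and root access on the node:

 # run on a node; requires root
 $ sudo iptables -t nat -L KUBE-SERVICES -n | head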

Container Runtime

See: Container Runtime

Control Plane

The control plane is the collective name for a cluster's master nodes, which run the cluster control components. The control plane exposes the API via the API Server and contains the cluster store (which persists state in etcd), the controller manager, the cloud controller manager, the scheduler and other management components. The control plane makes workload scheduling decisions, performs monitoring and responds to external and internal events. These components can run as traditional operating system services (daemons) or as containers. In production environments, the control plane usually runs across multiple computers, providing fault tolerance and high availability.

Master Node

A master node runs a collection of system services that manage the Kubernetes cluster. The master nodes are sometimes called heads or head nodes, and most often simply masters. Collectively, they represent the control plane. While it is possible to execute user workloads on master nodes, this is generally not recommended: keeping user workloads off the masters frees up their resources exclusively for cluster management activities.

HA Master Nodes

The recommended configuration includes 3 or 5 replicated masters: an odd number, so that a quorum can be maintained in the event of a network partition.

Control Plane System Services

API Server

https://kubernetes.io/docs/concepts/overview/components/#kube-apiserver

The API server is the control plane front-end service. All components (internal system components and external user components) communicate exclusively via the API Server and use the same API. The most common operation is to POST a manifest as part of a REST API invocation - once the invocation is authenticated and authorized, the manifest content is validated, then persisted into the cluster store, and various controllers kick in to ensure that the cluster state matches the desired state expressed in the manifest. The API Server runtime is represented by the kube-apiserver binary. The API server has admission controllers compiled into its binary. The clients sending API requests into the server can use different authentication strategies. The API server can be accessed from inside pods at https://kubernetes.default.
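As an example, the API can be invoked from inside a pod with plain HTTPS, authenticating with the pod's service account token. A minimal sketch, assuming the service account is authorized to list pods in the "default" namespace:

 # from inside a pod
 TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
 curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
   -H "Authorization: Bearer ${TOKEN}" \
   https://kubernetes.default/api/v1/namespaces/default/pods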

Admission Controllers

An admission controller is a piece of code that intercepts requests to the API Server prior to persistence of the object, but after the request is authenticated and authorized. Some of these controllers allow dynamic admission control, implemented as extensible functionality that can run as webhooks configured at runtime. More details:

Admission Controller Concepts
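The set of active admission controllers is selected at API server startup via kube-apiserver flags; a sketch of an invocation fragment, with an illustrative plugin selection:

 # fragment of a kube-apiserver invocation; the plugin selection is illustrative
 kube-apiserver \
   --enable-admission-plugins=NamespaceLifecycle,LimitRanger,NodeRestriction \
   --disable-admission-plugins=DefaultStorageClass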

API Server Endpoints

See: API Server Endpoints

Cluster Store

The cluster store is the service that persistently stores the entire configuration and state of the cluster. The cluster store is the only stateful part of the control plane, and the single source of truth for the cluster. The current implementation is based on etcd. Production deployments run in an HA configuration, with 3 to 5 replicas. etcd prefers consistency over availability, and it will halt updates to the cluster in split-brain situations to maintain consistency. However, if etcd becomes unavailable, the applications already running on the cluster will continue to work.

Also see:

etcd
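The health of the cluster store can be verified with etcdctl. A sketch, assuming the etcd v3 API and kubeadm-style certificate locations (the paths are illustrative):

 ETCDCTL_API=3 etcdctl \
   --endpoints=https://127.0.0.1:2379 \
   --cacert=/etc/kubernetes/pki/etcd/ca.crt \
   --cert=/etc/kubernetes/pki/etcd/server.crt \
   --key=/etc/kubernetes/pki/etcd/server.key \
   endpoint health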

Controller Manager

https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager
https://kubernetes.io/docs/concepts/architecture/controller/

The controller manager implements multiple specialized and independent control loops that monitor the state of the cluster and respond to events, ensuring that the current state of the cluster matches the desired state, as declared to the API server, thus implementing the declarative approach to operations. The controller manager is shipped as a monolithic binary, usually named kube-controller-manager, which runs either as a pod in the "kube-system" namespace or as a process on one of the control plane nodes. The controller manager is a "controller of controllers", and includes the node controller, the endpoints controller, the ReplicaSet controller, the persistent volume controller and the horizontal pod autoscaler controller, described below. The logic implemented by each control loop consists of obtaining the desired state, observing the current state, determining differences and, if differences exist, reconciling them.
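The reconciliation behavior can be observed directly: deleting a pod owned by a ReplicaSet makes the corresponding control loop detect a difference between desired and current state and recreate the pod. The deployment name below is hypothetical:

 $ kubectl create deployment web --image=nginx --replicas=3
 $ kubectl delete pod <one-of-the-web-pods>
 $ kubectl get pods -w    # a replacement pod is created almost immediately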

Node Controller

Endpoints Controller

See: Endpoints Controller

ReplicaSet Controller

See: ReplicaSet Controller

Persistent Volume Controller

See: Persistent Volume Controller

Horizontal Pod Autoscaler Controller

See: Horizontal Pod Autoscaler Controller

Cloud Controller Manager

The cloud controller manager is a system service that manages integration with the underlying cloud technology and services such as storage and load balancers. It is only present if Kubernetes runs on a cloud like AWS, Azure or GCP.
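Its effect can be observed by creating a Service of type LoadBalancer: on a cloud, the cloud controller manager provisions the external load balancer and publishes its address in the service's status. The deployment name is hypothetical and reuses the example above:

 $ kubectl expose deployment web --type=LoadBalancer --port=80
 $ kubectl get service web -w    # EXTERNAL-IP goes from <pending> to a cloud-assigned address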

Scheduler

https://kubernetes.io/docs/reference/command-line-tools-reference/kube-scheduler/

The scheduler is a system service whose job is to distribute pods to nodes for execution. An individual pod can be scheduled on one node and one node only; the target node is chosen by the scheduler as a result of evaluating a set of predicates (affinity and anti-affinity rules, resource availability, etc.), followed by ranking according to criteria such as whether the node already has the image, how many pods are already running, etc. The highest-ranking node is chosen to run the pod. If the scheduler cannot find a suitable node, the pod goes into the "Pending" state.
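The reason a pod is "Pending" can be read from its scheduling events; the pod name and the event text below are illustrative:

 $ kubectl get pods --field-selector=status.phase=Pending
 $ kubectl describe pod <pending-pod>
 # among the events, something like:
 #   Warning  FailedScheduling  ...  0/5 nodes are available: insufficient memory.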

Data Plane

The data plane is the set of worker nodes, a layer that provides resources such as CPU, memory, network and storage so that containers can run and connect to the network.

Worker Node

A worker node, most often referred to simply as a "node" (as opposed to a master), is where the application services run. Collectively, the worker nodes make up the data plane. A worker node constantly watches for new work assignments, which materialize in the form of pods, the components of the application workload.

A Kubernetes cluster can be operational, in that its API server can answer requests, without any worker node. However, that cluster will not be very useful, as it will not be capable of scheduling workloads.