Kubernetes Concepts

External

Internal

Overview

Kubernetes is a container orchestration platform: it orchestrates Docker containers across multiple hosts and manages them in a clustered environment. It orchestrates containers at scale, defines application topologies, handles parts of the container networking, manages container state and schedules containers across hosts.

Node

A node is a Linux container host.

It is based on RHEL or Red Hat Atomic and provides a runtime environment where applications run inside containers, grouped in pods that are assigned to the node by the master. Nodes are orchestrated by masters.

Nodes can be organized into many different topologies.

A node daemon runs on each node.

Pod

https://kubernetes.io/docs/concepts/workloads/pods/pod/


A pod runs one or more containers, deployed together on one host, as a single unit. A pod cannot span hosts.

The pod contains collocated applications that are relatively tightly coupled and run with a shared context. Within that context, an application may have individual cgroups isolation applied. A pod models an application-specific logical host, containing applications that in a pre-container world would have run on the same physical or virtual host.

Each pod has its own IP address and can be assigned persistent storage volumes; all containers in the pod share that IP address, the volumes and the other resources allocated to the pod.

The pod is the smallest unit that can be defined, deployed and managed. Kubernetes orchestrates pods.
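A minimal sketch of a pod definition, assuming a single hypothetical web container (the name, image and port are placeholders):

 apiVersion: v1
 kind: Pod
 metadata:
   name: example-pod
   labels:
     app: example
 spec:
   containers:
     - name: web
       image: httpd:2.4
       ports:
         - containerPort: 80

All containers declared under "containers" are deployed together on the same node and share the pod's IP address and volumes.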

Complex applications can be made of any number of pods.

Verify this applies to Kubernetes as well, not only OpenShift:

Pods are treated as static and cannot be changed while they are running. To change a pod, the current pod must be terminated and a new one created with a modified base image, configuration, etc.

Pods do not maintain state; they are expendable.

Pods should not be created or managed directly, but through controllers.

OpenShift Pod

Pod Lifecycle

  • A pod is defined.
  • A pod is assigned to run on a node.
  • The pod runs until its containers exit or the pod is removed.
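The current phase of a pod (Pending, Running, Succeeded, Failed or Unknown) can be inspected with kubectl; the pod name below is hypothetical:

 kubectl get pod example-pod
 kubectl get pod example-pod -o jsonpath='{.status.phase}'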

Pod Name

Pods must have a unique name within their namespace. A pod definition can specify a base name and use the "generateName" attribute to append random characters to the end of the base name, thus generating a unique name.
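For illustration, a pod definition that relies on "generateName" might look like the following; the base name is a placeholder:

 apiVersion: v1
 kind: Pod
 metadata:
   generateName: example-pod-   # the API server appends random characters, e.g. example-pod-x7k2q
 spec:
   containers:
     - name: web
       image: httpd:2.4

Such a definition is typically submitted with "kubectl create", since the final name is only assigned by the API server at creation time.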

Storage

Volume

OpenShift Volume

etcd

A distributed key/value store that holds the state of the environment.

etcd

Scheduler

Scheduling is essentially the master's main function: when a user requests the creation of a pod, the master determines where to run it. The scheduler is a component that runs on the master and determines the best fit for running pods across the environment. The scheduler also spreads pod replicas across nodes, for application HA. The scheduler reads data from the pod definition and tries to find a node that is a good fit based on configured policies. The scheduler does not modify the pod; it creates a binding that ties the pod to the selected node.
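As a sketch of how the pod definition can influence this decision, a "nodeSelector" constraint restricts the candidate nodes to those carrying a given label (the label key/value below is an assumption):

 apiVersion: v1
 kind: Pod
 metadata:
   name: example-pod
 spec:
   nodeSelector:
     disktype: ssd          # only nodes labeled disktype=ssd are considered by the scheduler
   containers:
     - name: web
       image: httpd:2.4

Once a node is selected, the scheduler records the decision as a binding; the chosen node then appears in the pod's "spec.nodeName" field.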

The scheduler is deployed as a container (referred to as an infrastructure container).

OpenShift Scheduler

Namespace

OpenShift Namespace

Policies

Policies are rules that specify which actions users can and cannot perform on objects (pods, services, etc.).
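In Kubernetes, such rules are typically expressed as RBAC objects; a minimal sketch of a role that only allows reading pods in a hypothetical namespace:

 apiVersion: rbac.authorization.k8s.io/v1
 kind: Role
 metadata:
   name: pod-reader
   namespace: example-namespace
 rules:
   - apiGroups: [""]          # "" refers to the core API group
     resources: ["pods"]
     verbs: ["get", "list", "watch"]

The role only takes effect once it is bound to a user or group through a RoleBinding.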

OpenShift Policies

Service

A service represents a group of pods (which may come and go) and provides a permanent IP address, hostname and port for other applications to use. A service resource is an abstraction that defines a logical set of pods and a policy used to access those pods. The service layer is how applications communicate with one another.
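A minimal sketch of a service definition; the name, labels and ports are placeholders:

 apiVersion: v1
 kind: Service
 metadata:
   name: example-service
 spec:
   selector:
     app: example             # traffic is routed to pods carrying this label
   ports:
     - port: 80               # port exposed by the service
       targetPort: 8080       # port the selected containers listen on

Pods matching the selector may come and go; the service's IP address, hostname and port remain stable.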

OpenShift Service

API

OpenShift API

Label

Labels are simple key/value pairs that can be used to group and select arbitrarily related objects. Most Kubernetes objects can include labels in their metadata.
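For example, labels are declared under an object's metadata; the key/value pairs below are placeholders:

 metadata:
   name: example-pod
   labels:
     app: example
     tier: frontend
     environment: dev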

OpenShift Label

Selector

A set of labels used to select the group of objects (for example, the pods backing a service or a replication controller) that carry those labels.
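For instance, a service or replication controller specification carries a selector that matches pods by their labels (the values are placeholders); the same selector can be written on the kubectl command line as "-l app=example,tier=frontend":

 spec:
   selector:
     app: example
     tier: frontend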

OpenShift Selector

Replication Controller

A component that ensures the specified number of pod replicas, as declared in the environment state, is running at all times. If pods exit or are deleted, the replication controller instantiates more pods, up to the desired number. If more pods are running than desired, the replication controller deletes as many as necessary. It is NOT the replication controller's job to perform autoscaling based on load or traffic.
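A minimal sketch of a replication controller that keeps three replicas of a hypothetical web pod running (names, labels and image are placeholders):

 apiVersion: v1
 kind: ReplicationController
 metadata:
   name: example-rc
 spec:
   replicas: 3                # the controller maintains exactly three matching pods
   selector:
     app: example
   template:                  # pod template used to instantiate new replicas
     metadata:
       labels:
         app: example
     spec:
       containers:
         - name: web
           image: httpd:2.4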

OpenShift Replication Controller