Kubernetes Concepts
External
Internal
Overview
Pod
One or more containers deployed together on one host, containing colocated applications that are relatively tightly coupled and run with a shared context. The containers in a pod share resources such as IP addresses and volumes. The pod is the smallest unit that can be defined, deployed and managed. Kubernetes orchestrates pods.
Complex applications can be made of any number of pods.
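As a minimal sketch of the above, the hypothetical pod below colocates two containers that share the pod's IP address and network namespace; the names and images are assumptions made for illustration, not values from this article.
<syntaxhighlight lang="yaml">
# Minimal, hypothetical pod: two colocated containers sharing the pod's
# IP address and network namespace. Names and images are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
  - name: web
    image: nginx
  - name: helper
    image: busybox
    command: ["sh", "-c", "sleep 3600"]   # sidecar colocated with the web container
</syntaxhighlight>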
Storage
Volume
etcd
A distributed key/value datastore that holds the state of the environment.
Scheduler
The scheduler is a component that runs on the master and determines the best fit for running pods across the environment. The scheduler also spreads pod replicas across nodes, for application HA. The scheduler reads data from the pod definition and tries to find a node that is a good fit based on configured policies. The scheduler does not modify the pod; instead, it creates a binding that ties the pod to the selected node.
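A hedged sketch of the kind of data the scheduler reads from a pod definition: the hypothetical pod below restricts candidate nodes to those labeled disktype=ssd and reserves CPU and memory, so the scheduler binds it only to a node that satisfies both. The label and resource figures are assumed for illustration.
<syntaxhighlight lang="yaml">
# Hypothetical pod whose definition gives the scheduler placement hints.
apiVersion: v1
kind: Pod
metadata:
  name: scheduled-example
spec:
  nodeSelector:
    disktype: ssd          # only nodes labeled disktype=ssd are candidates
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: "250m"        # the pod is bound only to a node with this much
        memory: "128Mi"    # unreserved CPU and memory
</syntaxhighlight>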
Namespace
Policies
Policies are rules that specify which users can and cannot perform actions on objects.
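As one hedged illustration of such a rule in plain Kubernetes (OpenShift layers its own policy objects on top), the hypothetical RBAC Role below allows the users bound to it to read pods in a single namespace and nothing else; all names are assumptions.
<syntaxhighlight lang="yaml">
# Hypothetical RBAC Role: subjects bound to it may read pods in one
# namespace, but may not perform any other action on them.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: example-project
  name: pod-reader
rules:
- apiGroups: [""]          # "" is the core API group, where pods live
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
</syntaxhighlight>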
Service
A service represents a group of pods and provides a permanent IP address, hostname and port for other applications to use. A service resource is an abstraction that defines a logical set of pods and the policy used to access them. The service layer is how applications communicate with one another.
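A minimal sketch, assuming pods labeled app=web that listen on port 8080: the hypothetical service below gives other applications a stable name and port 80, regardless of which individual pods are currently running.
<syntaxhighlight lang="yaml">
# Hypothetical service: a stable name and port in front of all pods
# labeled app=web, whichever and however many are currently running.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web               # the logical set of pods the service routes to
  ports:
  - port: 80               # permanent port other applications use
    targetPort: 8080       # port the selected pods actually listen on
</syntaxhighlight>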
API
Label
Labels are simple key/value pairs that can be used to group and select arbitrarily related objects. Most Kubernetes objects can include labels in their metadata.
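A small sketch of labels in an object's metadata; the pod name and key/value pairs below are assumed for illustration.
<syntaxhighlight lang="yaml">
# Hypothetical pod carrying two labels in its metadata.
apiVersion: v1
kind: Pod
metadata:
  name: web-1
  labels:
    app: web               # arbitrary key/value pairs chosen by the user
    tier: frontend
spec:
  containers:
  - name: web
    image: nginx
</syntaxhighlight>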
Selector
A set of labels used to identify and select a group of objects, such as the pods a service or replication controller acts on.
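A sketch of an equality-based selector as it appears inside a service or replication controller spec, assuming the labels from the previous example:
<syntaxhighlight lang="yaml">
# Equality-based selector fragment; it matches objects carrying both labels.
selector:
  app: web
  tier: frontend
</syntaxhighlight>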
Replication Controller
A component that ensures a specified number of pod replicas is running at all times. If pods exit or are deleted, the replication controller instantiates more pods up to the desired number. If more pods are running than desired, the replication controller deletes as many as necessary. It is NOT the replication controller's job to perform autoscaling based on load or traffic.
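A minimal sketch, reusing the assumed app=web label from the earlier examples: the hypothetical replication controller below keeps three replicas of its pod template running, creating or deleting pods as the observed count drifts from the desired number.
<syntaxhighlight lang="yaml">
# Hypothetical replication controller keeping three replicas of a pod running.
apiVersion: v1
kind: ReplicationController
metadata:
  name: web
spec:
  replicas: 3              # desired number of pod replicas
  selector:
    app: web               # pods counted toward the desired number
  template:                # pod definition used to instantiate new replicas
    metadata:
      labels:
        app: web           # must match the selector above
    spec:
      containers:
      - name: web
        image: nginx
</syntaxhighlight>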