OpenShift Container Probes

=External=
* https://kubernetes.io/docs/concepts/workloads/pods/pod/
* https://blog.openshift.com/kubernetes-pods-life/
* https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/
=Internal=
* [[OpenShift Concepts#Pod|OpenShift Concepts]]
* [[OpenShift Pod Operations|Pod Operations]]
* [[OpenShift Pod Definition|Pod Definition]]
=Overview=
A ''pod'' is a Kubernetes primitive that logically encapsulates one or more [[Docker Concepts#Container|Docker containers]], deployed together onto a [[OpenShift_Concepts#Node|node]], as a single unit.
The containers of a pod are managed individually by the Docker server; they do not exist inside any physical super-structure. However, they are ''handled'' as a unit by Kubernetes. The defining characteristic of a pod is that all its containers share a virtual network device - a unique IP - and a set of [[OpenShift_Volume_Concepts#Volume_Types|volumes]]. Pods also define the security and runtime policy for each container. The [[OpenShift_Concepts#Pod_IP_Address|pod IP address]] is routable by default from any other pod in the project. Depending on the [[OpenShift Network Plugins#Overview|network plugin]] configured for a specific cluster, pods may also be reachable across the entire cluster. The default addresses are part of the 10.x.x.x set. The containers of a pod share the IP address and TCP port space, because they share the pod's virtual network device.
The pod is intended to contain collocated applications that are relatively tightly coupled and run with a ''shared context''. Within that context, an application may have individual [[Linux cgroups|cgroups]] isolation applied. A pod models an application-specific ''logical host'', containing applications that in a pre-container world would have run on the same physical or virtual host; as a consequence, a pod cannot span hosts. The pod is the ''smallest unit'' that can be defined, deployed and managed by OpenShift. Complex applications can be made of any number of pods, and OpenShift helps with pod orchestration.
OpenShift treats pods as static and largely immutable - changes cannot be made to a pod definition while the pod is running - and as expendable: they do not maintain state when they are destroyed and recreated. Therefore, pods are managed by [[#Controller|controllers]], which are specified in the pod description, rather than directly by users. To change a pod, the current pod must be terminated and a new one, with a modified base image and/or configuration, must be created.
A pod may contain one or more [[#Application_Container|application containers]] and one or more [[OpenShift Init Container#Overview|init containers]].
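For illustration only - the names, images and init container command below are hypothetical - a minimal sketch of a pod definition declaring one init container and one application container:
 apiVersion: v1
 kind: Pod
 metadata:
   name: example-pod
 spec:
   initContainers:
   - name: init
     image: busybox
     command: ['sh', '-c', 'echo initializing']
   containers:
   - name: rest-service
     image: docker.io/novaordis/rest-service
     ports:
     - containerPort: 8080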
The pods for a project are displayed by the following commands, and can also be viewed in the web console, under the project -> Applications -> Pods:
[[oc get#all|oc get all]]
[[Oc_get#pods.2C_po|oc get pods]]
=<span id='Pod_Definition_File'></span>Pod Definition=
{{Internal|OpenShift Pod Definition|Pod Definition}}
The definition of an existing pod can be obtained with
oc get -o yaml pod <''pod-name''>
[[oc describe#pod|oc describe pod]]
=Container Types=
==Application Container==
If a pod declares [[#Init_Container|init containers]], the application containers are only run after all init containers complete successfully.
==Init Container==
{{Internal|OpenShift Init Container|Init Container}}
=Controller=
A controller is the OpenShift component that creates and manages pods. The controller of a pod is reported by the [[oc describe#pod|oc describe pod]] command, under the "Controllers" section:
...
Controllers: ReplicationController/logging-kibana-1
...
The most common controllers are:
* [[OpenShift_Concepts#Replication_Controller|Replication Controllers]]
* <span id='DaemonSet'></span>[[OpenShift DaemonSet Concepts#Overview|DaemonSets]]
=Pod Name=
Pods must have a unique name within their namespace (project). The pod definition can specify a base name and use the "generateName" attribute to append random characters at the end of the base name, thus generating a unique name.
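As a sketch - the base name is arbitrary - a pod definition using "generateName" instead of "name":
 apiVersion: v1
 kind: Pod
 metadata:
   generateName: rest-service-
 spec:
   containers:
   - name: rest-service
     image: docker.io/novaordis/rest-service
The API server appends a random suffix to the base name, producing a name such as "rest-service-x7z2p".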
=Pod Type=
==Terminating==
A ''terminating'' pod has a positive integer as the value of "spec.activeDeadlineSeconds". Builder or deployer pods are terminating pods. The pod type can be specified as a [[OpenShift_Resource_Management_Concepts#Quota_Scope|scope]] for resource quotas.
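A minimal sketch of a terminating pod - the deadline value and the container are arbitrary:
 apiVersion: v1
 kind: Pod
 metadata:
   name: one-off-task
 spec:
   activeDeadlineSeconds: 3600
   restartPolicy: Never
   containers:
   - name: task
     image: busybox
     command: ['sh', '-c', 'echo done']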
==NonTerminating==
A ''non-terminating'' pod has no "spec.activeDeadlineSeconds" specification (nil). Long-running pods, such as a web server or a database, are non-terminating pods. The pod type can be specified as a [[OpenShift_Resource_Management_Concepts#Quota_Scope|scope]] for resource quotas.
=Pod Lifecycle=
* A pod is defined in a [[#Pod_Definition|pod definition]].
* A pod is instantiated and assigned to run on a node as a result of the [[OpenShift_Concepts#Scheduler|scheduling process]].
* The pod runs until its containers exit or the pod is removed. Its [[#phase|phase]] is reflected by the value of the [[#phase|status.phase]] field of the state maintained by Kubernetes for the pod.
* Depending on policy and exit code, pods may be removed after exiting, or retained to enable access to their containers' logs.
=Pod Status=
The pod's status is reflected in the "status" field of the oc get -o yaml output:
'''status''':
  hostIP: 192.168.122.26
  '''[[#phase|phase]]''': <font color=teal>Running</font>
  podIP: 10.130.1.26
  qosClass: BestEffort
  startTime: 2018-02-28T20:03:00Z
  '''[[#conditions|conditions]]''':
  - lastProbeTime: null
    lastTransitionTime: 2018-02-28T20:03:00Z
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: 2018-02-28T20:03:20Z
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: 2018-02-28T20:03:00Z
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://b25[...]
    image: docker.io/novaordis/rest-service@sha256:81e[...]
    imageID: docker-pullable://docker.io/novaordis/rest-service@sha256:81e1[...]
    lastState: {}
    name: rest-service
    ready: true
    restartCount: 0
    state:
      running:
        startedAt: 2018-02-28T20:03:07Z
==phase==
The phase is a simple, high-level summary of where the pod is in its lifecycle. The phase is not intended to be a comprehensive rollup of observations of Container or Pod state, nor is it intended to be a comprehensive state machine. The following values are possible:
===Pending===
The Pod has been accepted by Kubernetes, but one or more of its containers have not been created. This includes time before being scheduled as well as time spent downloading images over the network.
===Running===
The pod has been bound to a node, and all of the containers have been created. At least one container is still running, or is in the process of starting or restarting.
===Unknown===
The state of the pod could not be obtained, typically due to an error in communicating with the host of the pod.
===Terminal State===
A pod is in a terminal state if "status.phase" is either [[#Failed|Failed]] or [[#Succeeded|Succeeded]]:
====Succeeded====
A terminal state: all containers in the pod have terminated in success, and will not be restarted.
====Failed====
A terminal state: all containers in the pod have terminated, and at least one container has terminated in failure. That is, the container either exited with non-zero status or was terminated by the system.
==conditions==
The pod status includes an array of ''pod conditions'', which are essentially type/status pairs.
The types can be:
* PodScheduled
* Ready
* Initialized
* Unschedulable
The status field is a string, with possible values True, False, and Unknown.
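To extract just the phase or the condition types, rather than the full status, jsonpath queries along these lines should work:
 oc get pod <''pod-name''> -o jsonpath='{.status.phase}'
 oc get pod <''pod-name''> -o jsonpath='{.status.conditions[*].type}'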
=Pod Placement=
{{External|https://docs.openshift.com/container-platform/3.5/admin_guide/scheduler.html#controlling-pod-placement}}
Pods can be configured to execute on a specific node, defined by the node name, or on nodes that match a specific [[OpenShift_Configuration#Node_Selector|node selector]].
To assign a pod to a ''specific node'', <font color=red>TODO https://docs.openshift.com/container-platform/3.5/admin_guide/scheduler.html#constraining-pod-placement-labels</font>
To assign a pod to ''nodes that match a node selector'', add the "nodeSelector" element to the pod configuration, with a value consisting of key/value pairs, as described here:
{{Internal|OpenShift_Deployment_Operations#Assigning_a_Pod_to_Nodes_that_Match_a_Node_Selector|Assigning a Pod to Nodes that Match a Node Selector}}
After a successful placement, whether by a replication controller or by a DaemonSet, the pod <span id='successful_node_selector_recorded'></span>records the successful node selector expression as part of its definition, which can be rendered with oc get pod -o yaml:
spec:
  ...
  nodeSelector:
    logging: "true"
  ...
<font color=red>TODO Consolidate with [[OpenShift_Concepts#Node_Selector]]</font>
Once bound to a node, a Pod will never be rebound to another node.
=Local Manifest Pod=
{{External|https://docs.openshift.com/container-platform/latest/install_config/master_node_configuration.html#node-configuration-files}}
=Bare Pod=
{{External|https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/#bare-pods}}
A pod that is not backed by a [[OpenShift_Concepts#Replication_Controller|replication controller]]. Bare pods cannot be evacuated from nodes.
=Static Pod=
{{External|https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/#static-pods}}
=Pod Presets=
{{External|https://docs.openshift.com/container-platform/latest/dev_guide/pod_preset.html}}
=Container Restart Policy=
{{External|https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy}}
[[OpenShift Pod Definition#example_restartPolicy|restartPolicy]] may be:
* '''Always''' (default)
* '''OnFailure'''
* '''Never'''
The same restartPolicy applies to all containers in the pod.
Failed containers that are restarted by Kubernetes are restarted with an exponential back-off delay (10s, 20s, 40s …) capped at five minutes; the delay is reset after ten minutes of successful execution.
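As a sketch tying the restart policy to the back-off behavior described above - the pod name and command are hypothetical - a pod whose container always fails, and is therefore restarted with increasing delays:
 apiVersion: v1
 kind: Pod
 metadata:
   name: always-failing
 spec:
   restartPolicy: OnFailure
   containers:
   - name: task
     image: busybox
     command: ['sh', '-c', 'exit 1']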


=<span id='Pod_Probe'></span>Container Probe=
{{External|https://docs.openshift.com/container-platform/latest/dev_guide/application_health.html}}
{{External|https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes}}

Users can configure ''container probes'' for liveness or readiness. They are sometimes referred to as "pod probes", but they are configured at container level, not pod level. Each container can have its own set of probes, which are exercised, and return results, independently. Probes are specified in the [[OpenShift Pod Definition#example_containers|pod template]].

A probe is executed periodically by Kubernetes and consists of a diagnostic performed on the container. The diagnostic may have one of the following results: ''Success'', which means the container passed the diagnostic, ''Failure'', which means the container failed the diagnostic, and ''Unknown'', which means the diagnostic execution itself failed and no action should be taken.
==Probe Types==
===Container Execution Checks===
Kubernetes executes the command specified by "exec" inside the container. If the command exits with 0, the probe execution is considered a success; any other exit code is a failure.
===HTTP Checks===
Kubernetes performs an HTTP GET request, specified by "httpGet", against the container's IP address, on a specified port and path. The probe is considered a success if the response status code is at least 200 and less than 400.
===TCP Socket Checks===
Kubernetes attempts to open a TCP connection to the container, on the port specified by "tcpSocket". The probe is considered a success if the connection can be established.
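The "exec" and "tcpSocket" variants are shown in the liveness and readiness examples below. For the HTTP check, a hedged sketch - the port and path are hypothetical - would look like this:
 livenessProbe:
   httpGet:
     path: /health
     port: 8080
   initialDelaySeconds: 15
   periodSeconds: 10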

==Liveness Probe==

A liveness probe indicates whether the container is running. If the liveness probe fails, Kubernetes kills the container, and the container is subjected to its [[#Container_Restart_Policy|restart policy]], as described in [[#Liveness_Probe_Failure|Liveness Probe Failure]]. If a container does not provide a liveness probe, the liveness diagnostic is considered successful by default.

The following sequence should go in the container declaration of the pod template, at the same level as "name":

 livenessProbe:
   initialDelaySeconds: 30
   timeoutSeconds: 1
   successThreshold: 1
   failureThreshold: 3
   periodSeconds: 10
   tcpSocket:
     port: 5432

==Readiness Probe==

A readiness probe is deployed in a container to expose whether the container is ready to service requests. If a container does not provide a readiness probe, the readiness state after creation is "Success" by default. On readiness probe failure, Kubernetes stops sending traffic to that specific pod by removing the corresponding endpoint from the service, as described in the [[#Readiness_Probe_Failure|readiness probe failure]] section. <font color=red>TODO: what about the router?</font>

A readiness probe is useful when we want to automatically stop sending traffic to a pod that enters an unstable state, and resume sending traffic if, and when, the pod recovers. It could also be used to implement a mechanism for taking the container down for maintenance. Note that if you just want to be able to drain requests when the pod is deleted, you do not necessarily need a readiness probe: on deletion, the pod automatically puts itself into an unready state, regardless of whether a readiness probe exists, and it remains in that state while it waits for its containers to stop.

The following sequence should go in the container declaration of the pod template, at the same level as "name":

 readinessProbe:
   initialDelaySeconds: 5
   timeoutSeconds: 1
   successThreshold: 1
   failureThreshold: 3
   periodSeconds: 10
   exec:
     command:
     - /bin/sh
     - -i
     - -c
     - psql -h 127.0.0.1 -U $POSTGRESQL_USER -q -d $POSTGRESQL_DATABASE -c 'SELECT 1'

==Probe Operations==

After the container is started, Kubernetes waits for initialDelaySeconds seconds, then triggers the execution of the probe specified by "exec", "httpGet" or "tcpSocket". Once the probe execution has started, Kubernetes waits timeoutSeconds seconds (default 1 second) for the execution to complete.

If the probe execution is successful, the success counts towards the successThreshold. After a failure, the number of consecutive successful executions specified in successThreshold must be reached for the container to be considered passing the probe again. For liveness probes, this value must be 1, which is also the default.

If the probe execution does not complete within timeoutSeconds seconds, or it explicitly fails, the failure counts towards the failureThreshold. The number of consecutive failed executions specified in failureThreshold must be reached before the container is considered to be failing the probe.

The probe is executed periodically, every periodSeconds seconds.
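As a worked example, the liveness probe configuration used above, with its timing annotated (the annotations assume the probe runs on schedule):
 livenessProbe:
   initialDelaySeconds: 30  # first execution 30 seconds after container start
   periodSeconds: 10        # subsequent executions every 10 seconds
   timeoutSeconds: 1        # each execution must complete within 1 second
   failureThreshold: 3      # 3 consecutive failures fail the probe
   successThreshold: 1      # must be 1 for liveness probes
   tcpSocket:
     port: 5432
With these values, a container that stops responding in steady state is considered failing, and is killed, roughly failureThreshold × periodSeconds = 30 seconds later.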

==Liveness Probe Failure==

If the liveness probe fails, Kubernetes kills the container and the container is subjected to its [[#Container_Restart_Policy|restart policy]]. An occasionally failing liveness probe shows up in the pod's restart count:

NAME                   READY     STATUS    RESTARTS   AGE 
rest-service-1-9p9hj   1/1       Running   3          1m

Note that a pod will maintain its name after a restart.

If the liveness probe fails consistently, the pod enters a crash loop backoff state. <font color=red>TODO: what exactly is the condition that makes it go from "Running" to "CrashLoopBackOff"?</font>

NAME                   READY     STATUS             RESTARTS   AGE
rest-service-1-9p9hj   0/1       CrashLoopBackOff   5          3m

==Readiness Probe Failure==

If the readiness probe fails, the EndpointsController removes the pod's IP address from the endpoints of all services that match the pod. The service still exists, but it lists fewer endpoints. If the service is backed by a single-replica pod, it will have zero endpoints.

The pod will still report the Running phase (status), but it will not be READY:

NAME                       READY     STATUS    RESTARTS   AGE
po/rest-service-3-bm1t9    0/1       Running   0          2m
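One way to observe the endpoint removal is to list the endpoints of the affected service before and after the failure (the service name is a placeholder):
 oc get endpoints <''service-name''>
While the readiness probe is failing, the ENDPOINTS column lists fewer addresses, or none.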

Note that if the pod "heals" - the readiness probe starts passing after the configured number of successful run reaches successThreshold, the endpoint is re-attached to the service, automatically.