OpenShift Logging Concepts

=External=

* https://kubernetes.io/docs/concepts/cluster-administration/logging/
* https://docs.openshift.com/container-platform/latest/install_config/aggregate_logging.html
* https://docs.openshift.com/container-platform/latest/install_config/install/advanced_install.html#advanced-install-cluster-logging
=Overview=

OpenShift provides log aggregation with the [[Elastic Stack#EFK|EFK stack]]. [[fluentd]] captures logs from nodes, pods and applications and stores the log data in [[Elasticsearch]]. [[Kibana]] offers a UI for [[Elasticsearch]]. [[fluentd]], [[Elasticsearch]] and [[Kibana]] are deployed as OpenShift pods, on dedicated [[#Infrastructure_Node|infrastructure nodes]]. Logging components communicate securely. They are usually part of the "logging" namespace. Application developers can view the logs of the projects they have view access to. [[OpenShift_Security_Concepts#Cluster_Administrator|Cluster administrators]] can view all logs.
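
Since the logging components run as regular pods, they can be inspected with the usual oc commands. For example, assuming the default "logging" namespace and sufficient permissions:

 oc get pods -n logging
 oc get pods -n logging -o wide    # shows the infrastructure node each pod landed on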


Logging support is not provided by default but it can be enabled during installation, by setting "[[OpenShift_hosts#openshift_hosted_logging_deploy|openshift_hosted_logging_deploy]]=true" in the [[OpenShift_hosts|Ansible hosts file]].
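
A minimal sketch of the corresponding inventory fragment (surrounding variables are omitted, and the group name assumes the standard advanced-install inventory layout):

 [OSEv3:vars]
 ...
 openshift_hosted_logging_deploy=true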


=OpenShift Master and Node Processes Logging Level=

{{Internal|OpenShift Logging Levels|OpenShift Logging Levels}}

=Installation=

{{Internal|OpenShift Logging Installation|Logging Installation}}

=The "logging" Project=

For more about projects, see [[OpenShift Concepts#Project|OpenShift Concepts - Projects]].


=Sizing=

https://docs.openshift.com/container-platform/latest/install_config/aggregate_logging_sizing.html#install-config-aggregate-logging-sizing
=Operation Logs=

The operations logs consist of /var/log/messages on the nodes and the logs from the "default", "openshift" and "openshift-infra" projects. OpenShift offers the option of managing the operations logs with a separate Elasticsearch/Kibana cluster: if [[OpenShift_hosts#Other_Logging_Configuration_Options|openshift_logging_use_ops]] is set to "true" in the OpenShift Ansible inventory file, Fluentd splits the logs between the main cluster and an operations logs cluster, and a second Elasticsearch and Kibana are deployed. These deployments are distinguishable by the -ops suffix included in their names.
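
A sketch of the inventory fragment that enables the separate operations cluster, combining this variable with the deployment flag mentioned in the [[#Overview|Overview]]:

 [OSEv3:vars]
 ...
 openshift_hosted_logging_deploy=true
 openshift_logging_use_ops=true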
 
=Ops Cluster=
 
{{External|https://docs.openshift.com/container-platform/3.5/install_config/aggregate_logging.html#aggregated-ops}}
 
=Components=
 
* [[Elasticsearch and OpenShift|Elasticsearch]]
* [[Fluentd and OpenShift|Fluentd]]
* <span id='Kibana'></span>[[Kibana and OpenShift|Kibana]]
* [[OpenShift Curator|Curator]]


=Organizatorium=
==Logging - From Source to Human==

Anything sent to stdout/stderr of a container is managed by Docker and placed in files on the host filesystem. In OpenShift that goes to /var/lib/docker/containers/<''container-id''>/<''container-id''>-json.log.

A line in the Docker log file might look like this JSON:

 {"log":"2014/09/25 21:15:03 something\n",
  "stream":"stderr",
  "time":"2014-09-25T21:15:03.499185026Z"}

The Kubernetes kubelet creates, in the /var/log/containers directory on the host machine, a symbolic link to this file, whose name includes the pod name and the Kubernetes container name:

 synthetic-logger-0.25lps-pod_default_synth-lgr-997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b.log
 ->
 /var/lib/docker/containers/997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b/997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b-json.log
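
This naming convention is what lets a collector attribute a log file to a pod and a project. A sketch of extracting that metadata from the name shown above (the <pod>_<namespace>_<container>-<id>.log layout is inferred from the example, not from official documentation):

 # Example file name from /var/log/containers, as shown above.
 name = "synthetic-logger-0.25lps-pod_default_synth-lgr-997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b.log"
 pod, namespace, rest = name[:-len(".log")].split("_")
 container, _, container_id = rest.rpartition("-")
 print(pod, namespace, container, container_id)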

More details about this: {{External|https://docs.openshift.com/container-platform/3.6/install_config/install/host_preparation.html#managing-docker-container-logs}}

<font color=red>NOKB the log processing process, from a container generating the logs to oc logs.</font>

TO PROCESS: https://kubernetes.io/docs/concepts/cluster-administration/logging/


==Docker Container Logs==

https://docs.openshift.com/container-platform/latest/install_config/install/host_preparation.html#managing-docker-container-logs

Docker containers use a json-file logging driver and store logs in /var/lib/docker/containers/<hash>/<hash>-json.log


<font color=red>Aggregated logging is only supported using the journald driver in Docker. More details in https://docs.openshift.com/container-platform/latest/install_config/aggregate_logging.html#fluentd-upgrade-source.</font>
<font color=red>Aggregated logging is only supported using the journald driver in Docker. More details in https://docs.openshift.com/container-platform/latest/install_config/aggregate_logging.html#fluentd-upgrade-source.</font>

The size of the docker-managed logs is set in Docker's sysconfig file:

 OPTIONS='...  --log-opt max-size=10M --log-opt max-file=5 --log-level="warn"'

This is explained here: https://docs.openshift.com/container-platform/latest/install_config/install/host_preparation.html#managing-docker-container-logs

{{Internal|Docker_Concepts#Logging|Docker Logging}}
