OpenShift Logging Concepts

External

Internal

Overview

OpenShift provides log aggregation with the EFK stack. Fluentd captures logs from nodes, pods and applications and stores the log data in Elasticsearch. Kibana provides a UI on top of Elasticsearch. Fluentd, Elasticsearch and Kibana are deployed as OpenShift pods, on dedicated infrastructure nodes. The logging components communicate with each other securely and usually live in the "logging" namespace. Application developers can view the logs of the projects for which they have view access. Cluster administrators can view all logs.
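
For orientation, the deployed components can be listed directly. A minimal check, assuming the default "logging" namespace and the default component naming:

 # list the EFK pods and the nodes they were scheduled on (assumes the "logging" project)
 oc get pods -n logging -o wide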

Logging support is not deployed by default, but it can be enabled during installation by setting "openshift_hosted_logging_deploy=true" in the Ansible hosts file.
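
A minimal hosts file excerpt, assuming the standard [OSEv3:vars] section used by the advanced installation:

 [OSEv3:vars]
 # deploy the EFK aggregated logging stack as part of the installation
 openshift_hosted_logging_deploy=true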

Installation

Logging must be explicitly enabled during the advanced installation, as described here:

OpenShift hosts File - Logging Configuration

Then, the post-install logging configuration must be applied, as described here:

Post-Install Logging Configuration

There is also a dedicated Ansible playbook that can be used to deploy and upgrade aggregate logging:

ansible-playbook [-i </path/to/inventory>] /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/openshift-logging.yml

The Ansible installation deploys all resources needed to support the stack: Secrets, Service Accounts and DeploymentConfigs.
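
As a quick sanity check after the playbook completes, those resources can be listed in one command. A sketch, assuming the default "logging" project:

 # verify the resources created by the logging playbook
 oc get secrets,serviceaccounts,deploymentconfigs -n logging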

Sizing

https://docs.openshift.com/container-platform/latest/install_config/aggregate_logging_sizing.html#install-config-aggregate-logging-sizing

Operation Logs

The operations logs consist of /var/log/messages on the nodes and the logs from the "default", "openshift" and "openshift-infra" projects. OpenShift offers the option to manage the operations logs with a separate Elasticsearch/Kibana cluster. If openshift_logging_use_ops is set to "true" in the OpenShift Ansible inventory file, Fluentd splits the logs between the main cluster and an operations logs cluster, and a second Elasticsearch cluster and Kibana instance are deployed. These deployments are distinguishable by the -ops suffix included in their names.
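
A hosts file excerpt that enables the split, again assuming the [OSEv3:vars] section of the advanced installation inventory:

 [OSEv3:vars]
 # route node and infrastructure project logs to a dedicated *-ops Elasticsearch/Kibana cluster
 openshift_logging_use_ops=true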

Organizatorium

Docker Container Logs

https://docs.openshift.com/container-platform/latest/install_config/install/host_preparation.html#managing-docker-container-logs

Docker containers using the json-file logging driver store their logs in /var/lib/docker/containers/<hash>/<hash>-json.log.
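
To keep those files from growing without bound, rotation options can be passed to the Docker daemon. A sketch of /etc/sysconfig/docker, with example limits that should be tuned per environment:

 # /etc/sysconfig/docker - example json-file rotation settings
 OPTIONS='--log-driver=json-file --log-opt max-size=1M --log-opt max-file=3'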

Aggregated logging is only supported when Docker uses the journald logging driver. More details are available at https://docs.openshift.com/container-platform/latest/install_config/aggregate_logging.html#fluentd-upgrade-source.
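
Switching the daemon to journald follows the same pattern. A minimal /etc/sysconfig/docker sketch; Docker must be restarted for the change to take effect:

 # /etc/sysconfig/docker - use the journald driver required by aggregated logging
 OPTIONS='--log-driver=journald'
 # systemctl restart docker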