OpenShift Logging Installation
Internal
Procedure
Installation During the Main Procedure
Logging must be explicitly enabled during the advanced installation. The Ansible installation deploys all resources needed to support the stack: Secrets, Service Accounts and DeploymentConfigs.
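If the advanced installation inventory is used, a minimal fragment that enables the stack might look like the following (a sketch; the variable name is assumed from openshift-ansible 3.x):

[OSEv3:vars]
# deploy the aggregated logging (EFK) stack as part of the installation (assumed variable name)
openshift_logging_install_logging=true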
Installation Independent of the Main Procedure
There is also a dedicated Ansible playbook that can be used to deploy and upgrade logging independently of the main installation procedure:
ansible-playbook [-i </path/to/inventory>] /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/openshift-logging.yml
Post-Install Configuration
In order for the logging pods to spread evenly across the cluster, an empty node selector should be used. The "logging" project should already have been created. The empty per-project node selector can be updated as follows:
oc edit namespace logging
...
metadata:
  annotations:
    openshift.io/node-selector: ""
...
This is required for the fluentd DaemonSet to work correctly.
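Alternatively, the same empty node selector can be applied non-interactively; a minimal sketch using oc annotate:

oc annotate namespace logging openshift.io/node-selector="" --overwrite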
Installation Validation
ElasticSearch
ElasticSearch should be deployed, running, and operational - logs must not contain errors:
oc project logging
oc get pods -l 'component=es'
oc logs -f logging-es-3fs5ghyo-3-88749
or
oc logs -f logging-es-data-master-ebitzkr3-1-x0jch
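Cluster health can also be queried from inside one of the ElasticSearch pods; a sketch, assuming the admin certificates are mounted at /etc/elasticsearch/secret in the ElasticSearch container:

oc exec logging-es-data-master-ebitzkr3-1-x0jch -- curl -s \
    --cacert /etc/elasticsearch/secret/admin-ca \
    --cert /etc/elasticsearch/secret/admin-cert \
    --key /etc/elasticsearch/secret/admin-key \
    https://localhost:9200/_cat/health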
fluentd
All nodes should run a fluentd pod, and the fluentd pods should be operational; their logs must not contain errors:
oc project logging
oc get pods -l 'component=fluentd'

NAME                    READY     STATUS    RESTARTS   AGE
logging-fluentd-2r4rt   1/1       Running   0          49m
logging-fluentd-37d72   1/1       Running   0          35m
logging-fluentd-4ljkn   1/1       Running   0          3h
logging-fluentd-74l39   1/1       Running   0          3h
logging-fluentd-7l25h   1/1       Running   0          3h
logging-fluentd-sbh7r   1/1       Running   0          3h
logging-fluentd-w4shg   1/1       Running   0          39m
oc logs -f logging-fluentd-2r4rt
...
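To confirm that every node runs a fluentd pod, the fluentd pod count can be compared with the node count; a minimal sketch:

oc get nodes --no-headers | wc -l
oc get pods -l 'component=fluentd' --no-headers | wc -l

The two numbers should match (assuming fluentd is not restricted to a subset of nodes by a label selector).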
Kibana
Kibana should be deployed, running, and operational - logs must not contain errors:
oc project logging
oc get pods -l 'component=kibana'
oc logs -f -c kibana-proxy logging-kibana-10-sb7sk
oc logs -f -c kibana logging-kibana-10-sb7sk
The Logging Portal
The logging portal (Kibana) should be available at its externally exposed route:
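The URL can be retrieved from the Kibana route; a sketch, assuming the default route name logging-kibana:

oc get route logging-kibana -n logging
oc get route logging-kibana -n logging -o jsonpath='{.spec.host}'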