OpenShift Logging Installation

Internal

Procedure

Installation During the Main Procedure

Logging must be explicitly enabled during the advanced installation, as described here:

openshift_hosted_logging_deploy

The Ansible installation deploys all the resources needed to support the logging stack: Secrets, Service Accounts and DeploymentConfigs.
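
For reference, a minimal inventory fragment that enables the logging deployment could look like the sketch below; the exact variable name and value should be verified against the openshift-ansible version in use:

[OSEv3:vars]
# deploy the aggregated logging stack as part of the advanced installation
openshift_hosted_logging_deploy=true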

Installation Independent of the Main Procedure

There is also a dedicated Ansible playbook that can be used to deploy and upgrade logging independently of the main installation procedure:

ansible-playbook [-i </path/to/inventory>] /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/openshift-logging.yml
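
For example, assuming the same inventory file that was used for the main installation sits at Ansible's default location, /etc/ansible/hosts (an assumption, substitute the actual inventory path):

# re-run only the logging deployment against the existing cluster
ansible-playbook -i /etc/ansible/hosts /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/openshift-logging.yml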

Post-Install Configuration

Internal: Post-Install Logging Configuration (OpenShift 3.5 Installation)

TODO: revisit this in light of the fluentd DaemonSet reconfiguration.

In order for the logging pods to spread evenly across the cluster, an empty node selector should be used. The "logging" project should have been created already. The empty per-project node selector can be updated as follows:

oc edit namespace logging
...
metadata:
  annotations:
    openshift.io/node-selector: ""
...
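
The same result can be obtained non-interactively; a sketch, assuming the logging project is named "logging":

# set an empty per-project node selector without opening an editor
oc annotate namespace logging openshift.io/node-selector="" --overwrite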

Installation Validation

ElasticSearch

ElasticSearch should be deployed, running, and operational, and its logs must not contain errors:

oc project logging
oc get pods -l 'component=es'
oc logs -f logging-es-3fs5ghyo-3-88749

or

oc logs -f logging-es-data-master-ebitzkr3-1-x0jch
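
In addition to inspecting the logs, the cluster health can be queried from inside one of the ElasticSearch pods. This is a sketch: the pod name is the example from above and the certificate paths under /etc/elasticsearch/secret/ are assumptions that should be verified against the deployed pod:

# query ElasticSearch cluster health from inside an ES pod
# (certificate locations are assumptions - adjust to the actual secret mount)
oc exec logging-es-3fs5ghyo-3-88749 -- curl -s \
    --cacert /etc/elasticsearch/secret/admin-ca \
    --cert /etc/elasticsearch/secret/admin-cert \
    --key /etc/elasticsearch/secret/admin-key \
    "https://localhost:9200/_cat/health?v"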

fluentd

All nodes should run a fluentd pod, the fluentd pods should be operational, and their logs must not contain errors:

oc project logging
oc get pods -l 'component=fluentd'
NAME                    READY     STATUS    RESTARTS   AGE
logging-fluentd-2r4rt   1/1       Running   0          49m
logging-fluentd-37d72   1/1       Running   0          35m
logging-fluentd-4ljkn   1/1       Running   0          3h
logging-fluentd-74l39   1/1       Running   0          3h
logging-fluentd-7l25h   1/1       Running   0          3h
logging-fluentd-sbh7r   1/1       Running   0          3h
logging-fluentd-w4shg   1/1       Running   0          39m
oc logs -f logging-fluentd-2r4rt 
...
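
Because fluentd is deployed as a DaemonSet, the number of running pods can be cross-checked against the DaemonSet status and against the nodes it selects; a sketch, assuming the default DaemonSet name and node label used by the OpenShift 3.x logging deployment:

# the DaemonSet should report as many ready pods as there are selected nodes
oc get daemonset logging-fluentd
# list the nodes carrying the fluentd node selector label
# (the label name is an assumption - check the DaemonSet's nodeSelector)
oc get nodes -l logging-infra-fluentd=true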

Kibana

Kibana should be deployed, running, and operational, and its logs must not contain errors:

oc project logging
oc get pods -l 'component=kibana'
oc logs -f -c kibana-proxy logging-kibana-10-sb7sk
oc logs -f -c kibana logging-kibana-10-sb7sk
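
The route exposing Kibana can also be verified; a sketch, assuming the route was created in the "logging" project:

# the route host should match the public Kibana URL used below
oc get route -n logging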

The Logging Portal

The logging portal should be available:

https://kibana.apps.openshift.novaordis.io/
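
A quick reachability check from the command line; -k is used here only because the route may be terminated with a self-signed certificate:

# expect an HTTP response, typically a redirect to the OpenShift login page
curl -k -I https://kibana.apps.openshift.novaordis.io/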