OpenShift Logging Installation
=Internal=
* [[OpenShift_Logging_Concepts#Installation|OpenShift Logging Concepts]]
* [[OpenShift_3.6_Installation#Logging_Installation|OpenShift 3.6 Installation]]
=Procedure=
==Installation During the Main Procedure==
Logging must be explicitly enabled during the advanced installation, as described here:
{{Internal|OpenShift_hosts#openshift_hosted_logging_deploy|openshift_hosted_logging_deploy}}
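For orientation, a minimal inventory fragment that enables logging might look like the one below. This is a sketch: openshift_hosted_logging_deploy comes from the page linked above, openshift_master_logging_public_url is an assumption that may not apply to every OpenShift version, and the Kibana host name is just this environment's example value.
 [OSEv3:vars]
 ...
 openshift_hosted_logging_deploy=true
 openshift_master_logging_public_url=https://kibana.apps.openshift.novaordis.io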
The Ansible installation deploys all the resources needed to support the stack: [[OpenShift_Security_Concepts#Secret|Secrets]], [[OpenShift_Security_Concepts#Service_Account|Service Accounts]] and [[OpenShift_Concepts#DeploymentConfig|DeploymentConfigs]].
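As a quick sanity check, assuming the stack was deployed into the "logging" project, those resources can be listed in one command:
 oc -n logging get secrets,serviceaccounts,deploymentconfigs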
==Installation Independent of the Main Procedure==
There is also a dedicated Ansible playbook that can be used to deploy and upgrade logging independently of the main installation procedure:
 ansible-playbook [-i </path/to/inventory>] /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/openshift-logging.yml
==Post-Install Configuration==
In order for the logging pods to spread evenly across the cluster, an empty [[OpenShift_Concepts#Node_Selector|node selector]] should be used. The "logging" project should have been created already. The empty [[OpenShift_Concepts#Per-Project_Node_Selector|per-project node selector]] can be set as follows:
 oc edit namespace logging
 ...
 metadata:
   annotations:
     <b>openshift.io/node-selector: ""</b>
 ...
This is required for the fluentd DaemonSet to work correctly.
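Alternatively, the same change can be applied non-interactively; this is equivalent to the edit above, and --overwrite is only needed if the annotation already carries a value:
 oc annotate namespace logging openshift.io/node-selector="" --overwrite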
==Installation Validation==
===ElasticSearch===
ElasticSearch should be deployed, running and operational, and its logs must not contain errors:
 oc project logging
 oc get pods -l 'component=es'
 oc logs -f logging-es-3fs5ghyo-3-88749
or
 oc logs -f logging-es-data-master-ebitzkr3-1-x0jch
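The cluster state can also be queried from inside an ElasticSearch pod. A sketch, reusing the example pod name from above and assuming the certificate mount points used by the 3.x aggregated logging images, which may differ between versions; a "green" (or at least "yellow") status indicates a healthy cluster:
 oc exec logging-es-3fs5ghyo-3-88749 -- curl -s \
     --cacert /etc/elasticsearch/secret/admin-ca \
     --cert /etc/elasticsearch/secret/admin-cert \
     --key /etc/elasticsearch/secret/admin-key \
     "https://localhost:9200/_cluster/health?pretty"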
===fluentd===
Every node should run a fluentd pod, the fluentd pods should be operational, and their logs must not contain errors:
 oc project logging
 oc get pods -l 'component=fluentd'
 NAME                    READY     STATUS    RESTARTS   AGE
 logging-fluentd-2r4rt   1/1       Running   0          49m
 logging-fluentd-37d72   1/1       Running   0          35m
 logging-fluentd-4ljkn   1/1       Running   0          3h
 logging-fluentd-74l39   1/1       Running   0          3h
 logging-fluentd-7l25h   1/1       Running   0          3h
 logging-fluentd-sbh7r   1/1       Running   0          3h
 logging-fluentd-w4shg   1/1       Running   0          39m
 oc logs -f logging-fluentd-2r4rt
 ...
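Because fluentd is deployed as a DaemonSet, the number of fluentd pods should match the number of nodes it is expected to run on. A quick way to compare the two counts; note that the node count may need adjusting if some nodes are unschedulable:
 oc get nodes --no-headers | wc -l
 oc -n logging get pods -l 'component=fluentd' --no-headers | wc -l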
===Kibana===
Kibana should be deployed, running and operational, and its logs must not contain errors:
 oc project logging
 oc get pods -l 'component=kibana'
 oc logs -f -c kibana-proxy logging-kibana-10-sb7sk
 oc logs -f -c kibana logging-kibana-10-sb7sk
===The Logging Portal===
The logging portal should be available:
 https://kibana.apps.openshift.novaordis.io/
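Availability can also be verified from the command line. A sketch: -k skips certificate verification, which may be necessary if the router uses a self-signed certificate, and the expected healthy response is a redirect to the login page rather than an error:
 curl -k -I https://kibana.apps.openshift.novaordis.io/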