OpenShift Troubleshooting


Internal

Overview

The general technique is to increase the logging level of various OpenShift master (api, controllers) and node processes as described here:

Change the Log Level for OpenShift Processes

--loglevel=10 works well in practice.

Then tail the process logs with journalctl. For more details on obtaining logging information, see:

OpenShift Master and Node Process Logging
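
A minimal sketch of the procedure, assuming an OpenShift 3.x installation where the master API process runs as the atomic-openshift-master-api systemd unit and receives its flags from the corresponding sysconfig file (unit and file names vary between Origin and OpenShift Container Platform installs):

# raise the log level (assumes an OPTIONS=--loglevel=<n> entry already exists in the file)
sudo sed -i 's/--loglevel=[0-9]*/--loglevel=10/' /etc/sysconfig/atomic-openshift-master-api

# restart the unit so the new flag takes effect
sudo systemctl restart atomic-openshift-master-api

# follow the journal of that unit
sudo journalctl -u atomic-openshift-master-api -f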

Metrics Troubleshooting

oadm diagnostics MetricsApiProxy
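
If the diagnostic reports problems, a quick sanity check is whether the metrics pods are actually running. A sketch, assuming metrics were deployed into the default openshift-infra project:

# list the metrics pods (hawkular-metrics, heapster, hawkular-cassandra)
oc -n openshift-infra get pods

# inspect the logs of a suspect pod, e.g. heapster
oc -n openshift-infra logs <heapster-pod-name>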

Troubleshooting Pods

oc -n <project-name> get pods
oc -n <project-name> describe po/<pod-name>

...
Name:			logging-fluentd-3kz30
...
Node:			node2/
...
Status:			Failed
Reason:			MatchNodeSelector
Message:		Pod Predicate MatchNodeSelector failed
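
For a MatchNodeSelector failure like the one above, compare the node selector the pod requires with the labels actually present on the node. A hypothetical check, using the node and pod names from the example output and assuming the pod lives in the logging project:

# labels carried by the node
oc get node node2 --show-labels

# node selector required by the pod
oc -n logging get pod logging-fluentd-3kz30 -o jsonpath='{.spec.nodeSelector}'

# if a required label is missing, it can be added to the node
oc label node node2 <key>=<value>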

To follow a pod's logs in-line:

oc logs -f <pod-name>
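
If the pod is crash-looping, the current container log may be empty; oc logs can also render the log of the previously terminated container instance, or of a specific container in a multi-container pod:

# log of the previously terminated container instance
oc logs --previous <pod-name>

# log of a specific container
oc logs -f <pod-name> -c <container-name>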

Troubleshooting Routes
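
A minimal starting point, assuming a default installation where the router runs in the default project as the "router" deployment configuration:

# inspect the route and verify it points at the intended service
oc -n <project-name> get routes
oc -n <project-name> describe route <route-name>

# check the router pods themselves
oc -n default get pods -l deploymentconfig=router
oc -n default logs <router-pod-name>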

Logging

OpenShift Logging

Troubleshooting Kibana

* https://docs.openshift.com/container-platform/latest/install_config/aggregate_logging.html#troubleshooting-kibana

Authorization Redirect:

https://master.openshift.novaordis.io/oauth/authorize?
  response_type=code&
  redirect_uri=https%3A%2F%2Fkibana.apps.openshift.novaordis.io%2Fauth%2Fopenshift%2Fcallback&
  scope=user%3Ainfo%20user%3Acheck-access%20user%3Alist-projects&
  client_id=kibana-proxy
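
If the OAuth redirect above loops or fails, a useful first step is to look at the kibana-proxy container, which handles the OpenShift OAuth flow for Kibana. A sketch, assuming the default aggregated logging deployment in the logging project:

# locate the Kibana pod(s)
oc -n logging get pods -l component=kibana

# the OAuth handling is done by the kibana-proxy container
oc -n logging logs <kibana-pod-name> -c kibana-proxy

# inspect the OAuth client the proxy authenticates with
oc get oauthclient kibana-proxy -o yaml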