OpenShift Troubleshooting

Internal

Overview

The general technique is to increase the logging level of various OpenShift master (api, controllers) and node processes as described here:

Change the Log Level for OpenShift Processes

--loglevel=10 seems to work fine.
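
As a sketch, on an RPM-based OpenShift Container Platform 3.x installation the level is raised by editing the OPTIONS line in the process' sysconfig file and restarting the corresponding service. The file and unit names below follow the Container Platform naming and are an assumption; origin installations name them differently:

 # raise the master API server log level to 10 (assumes OPTIONS already carries a --loglevel setting)
 sed -i 's/--loglevel=[0-9]*/--loglevel=10/' /etc/sysconfig/atomic-openshift-master-api
 systemctl restart atomic-openshift-master-api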

Then tail the journal with journalctl. More details on obtaining logging information:

OpenShift Master and Node Process Logging
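
For example, to follow the node process log (the unit name is the Container Platform one and is an assumption; adjust it for origin installs):

 # follow the node service journal
 journalctl -f -u atomic-openshift-node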

General Troubleshooting

Troubleshooting OpenShift Container Platform: Basics:

https://access.redhat.com/solutions/1542293

Master/Node Processes Troubleshooting

OpenShift Runtime

Metrics Troubleshooting

 oadm diagnostics MetricsApiProxy
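
If the targeted check does not surface the problem, the same tool runs every available diagnostic when invoked with no arguments, which is a broader but slower sweep:

 # run all available diagnostics
 oadm diagnostics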

Troubleshooting Pods

 oc -n <project-name> get pods
 oc -n <project-name> describe po/<pod-name>

 ...
 Name:			logging-fluentd-3kz30
 ...
 Node:			node2/
 ...
 Status:			Failed
 Reason:			MatchNodeSelector
 Message:		Pod Predicate MatchNodeSelector failed
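
A MatchNodeSelector failure means the pod's nodeSelector does not match the labels of the node it was placed on. A minimal way to compare the two and correct the node is sketched below; the logging-infra-fluentd key is the conventional fluentd selector and is an assumption for other pods:

 # labels currently on the node
 oc get node node2 --show-labels
 # nodeSelector the pod requires
 oc -n <project-name> get po/<pod-name> -o yaml | grep -A 2 nodeSelector
 # add the missing label so the selector matches (example key/value)
 oc label node node2 logging-infra-fluentd=true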

Follow a pod's logs in-line:

 oc logs -f <pod-name>
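
If the container already crashed and was restarted, the log of the previous instance is usually the interesting one:

 # logs of the previous container instance
 oc logs --previous <pod-name>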

Connect to a pod:

 oc rsh <pod-name>
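
When the image does not include an interactive shell, or only one command is needed, oc exec is an alternative to a full rsh session:

 # run a single command in the pod's first container (the command is an example)
 oc exec <pod-name> -- ls /var/log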

Troubleshooting Routes

Troubleshooting Logging

 oadm diagnostics AggregatedLogging
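
A quick complementary check is whether the aggregated logging pods themselves are healthy; the logging project is the default deployment target and is an assumption if the stack was deployed elsewhere:

 # list the state of the EFK pods
 oc -n logging get pods -o wide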

Troubleshooting Kibana

Troubleshooting Kibana in OpenShift

Cases