OpenShift Troubleshooting

=Internal=

=Overview=

The general technique is to increase the logging level of the OpenShift master processes (api, controllers) and the node process, as described here:

Change the Log Level for OpenShift Processes

--loglevel=10 seems to work fine.

Then tail the journalctl log. More details on getting logging information are available here:

{{Internal|OpenShift_Logging#Master_and_Node_Process_Logging|OpenShift Master and Node Process Logging}}
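
A minimal sketch of the procedure, assuming an RPM-installed OpenShift 3.x node managed by systemd; the unit name and sysconfig file used below (atomic-openshift-node, /etc/sysconfig/atomic-openshift-node) are assumptions that vary with version and installation method, and the master api and controllers processes can be handled the same way through their own units:

# raise the log level in the sysconfig file and restart the service
sed -i 's/--loglevel=[0-9]*/--loglevel=10/' /etc/sysconfig/atomic-openshift-node
systemctl restart atomic-openshift-node

# tail the process log via journalctl
journalctl -f -u atomic-openshift-node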

=General Troubleshooting=

[https://access.redhat.com/solutions/1542293 Troubleshooting OpenShift Container Platform: Basics]

=Metrics Troubleshooting=

oadm diagnostics MetricsApiProxy
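
On OpenShift 3.x, oadm is generally a shorthand for oc adm, so the same diagnostic should also be runnable as:

oc adm diagnostics MetricsApiProxy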

=Troubleshooting Pods=

List the pods in the project and describe the failing one:

oc -n <project-name> get pods
oc -n <project-name> describe po/<pod-name>

...
Name:			logging-fluentd-3kz30
...
Node:			node2/
...
Status:			Failed
Reason:			MatchNodeSelector
Message:		Pod Predicate MatchNodeSelector failed
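
The MatchNodeSelector failure above means the pod's node selector does not match the labels of any node it could be scheduled on. One way to compare the two, sketched with the pod from the output above and a hypothetical project name ("logging"):

# node selector requested by the pod (hypothetical project name)
oc -n logging get pod logging-fluentd-3kz30 -o yaml | grep -A 2 nodeSelector

# labels actually present on the nodes
oc get nodes --show-labels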

In-line pod logs:

oc logs -f <pod-name>
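
Two commonly useful variants; the container name is a placeholder, relevant for multi-container pods:

# logs of a specific container in the pod
oc logs -f <pod-name> -c <container-name>

# logs of the previous (crashed) instance of the container
oc logs --previous <pod-name>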

Connect to a pod:

oc rsh <pod-name>
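
To run a single command without an interactive shell, oc exec can be used instead; the command shown is only an example:

oc exec <pod-name> -- ls /var/log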

=Troubleshooting Routes=

=Logging=

{{Internal|OpenShift_Logging|OpenShift Logging}}

==Troubleshooting Kibana==

Login loop troubleshooting:

=Cases=