OpenShift Troubleshooting
Internal
Overview
The general technique is to increase the logging level of various OpenShift master (api, controllers) and node processes as described here:
--loglevel=10 seems to work fine.
Then tail the journalctl log. More details on getting logging information:
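A minimal sketch of the procedure, assuming an RPM-based OpenShift 3.x installation where the service options live in /etc/sysconfig and the node service is named atomic-openshift-node (file and unit names vary by installation and version):

# raise the node process verbosity to 10 in the service options file
sed -i 's/--loglevel=[0-9]*/--loglevel=10/' /etc/sysconfig/atomic-openshift-node
systemctl restart atomic-openshift-node

# follow the node journal while reproducing the problem
journalctl -u atomic-openshift-node -f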
General Troubleshooting
Troubleshooting OpenShift Container Platform: Basics:
Master/Node Processes Troubleshooting
- OpenShift Runtime
Metrics Troubleshooting
oadm diagnostics MetricsApiProxy
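Run without arguments, the tool executes every available check rather than only the metrics API proxy check named above; on newer releases oadm is also available as oc adm:

# run all available diagnostics
oadm diagnostics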
Troubleshooting Pods
oc -n <project-name> get pods
oc -n <project-name> describe po/<pod-name>

Name:     logging-fluentd-3kz30
...
Node:     node2/
...
Status:   Failed
Reason:   MatchNodeSelector
Message:  Pod Predicate MatchNodeSelector failed
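A pod rejected with MatchNodeSelector could not be placed because no node carries the labels its node selector requires. A sketch of how to compare the two (project and pod names are placeholders):

# show the node selector the pod was submitted with
oc -n <project-name> get po/<pod-name> -o yaml | grep -A 2 nodeSelector

# list the labels actually present on the nodes
oc get nodes --show-labels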
In-line pod logs:
oc logs -f <pod-name>
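If the container has crashed and restarted, or the pod runs more than one container, the standard oc logs flags narrow the output; <container-name> is a placeholder:

# logs of a specific container in a multi-container pod
oc logs -f <pod-name> -c <container-name>

# logs of the previous, crashed instance of the container
oc logs --previous <pod-name>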
Connect to a pod:
oc rsh <pod-name>
Troubleshooting Routes
Troubleshooting Logging
oadm diagnostics AggregatedLogging
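Beyond the diagnostic, the aggregated logging components can be inspected directly; a sketch assuming a default installation where they run in the logging project:

# list the Elasticsearch, Fluentd, Curator and Kibana pods
oc -n logging get pods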