OpenShift Troubleshooting
Internal
Overview
The general technique is to increase the logging level of the various OpenShift master (api, controllers) and node processes:
--loglevel=10 seems to work fine.
Then tail the journalctl log of the corresponding systemd unit.
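A minimal sketch of the procedure, assuming the systemd unit names used by the atomic-openshift 3.x packaging (Origin installations use origin-* unit names instead):

# Raise the log level by editing /etc/sysconfig/atomic-openshift-master-api
# (or the corresponding file for the controllers/node process) and setting:
#
#   OPTIONS=--loglevel=10

# Restart the process so the new log level takes effect:
systemctl restart atomic-openshift-master-api

# Tail the corresponding journalctl log:
journalctl -u atomic-openshift-master-api -f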
General Troubleshooting
Troubleshooting OpenShift Container Platform: Basics: https://access.redhat.com/solutions/1542293
Master/Node Processes Troubleshooting
- OpenShift Runtime
Metrics Troubleshooting
oadm diagnostics MetricsApiProxy
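If the diagnostic reports problems, a reasonable next step is to inspect the metrics pods directly; the sketch below assumes the metrics stack was deployed in its default openshift-infra namespace:

# Heapster, Hawkular Metrics and Cassandra run in openshift-infra by default:
oc -n openshift-infra get pods

# Inspect a pod that is not running:
oc -n openshift-infra describe pod <metrics-pod-name>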
Troubleshooting Pods
oc -n <project-name> get pods
oc -n <project-name> describe po/<pod-name>

Name:       logging-fluentd-3kz30
...
Node:       node2/
...
Status:     Failed
Reason:     MatchNodeSelector
Message:    Pod Predicate MatchNodeSelector failed
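MatchNodeSelector means the pod's node selector does not match the labels of the node it was placed on. A minimal way to compare the two (pod and project names are placeholders):

# Show the pod's node selector:
oc -n <project-name> get po/<pod-name> -o yaml | grep -A 2 nodeSelector

# Show the node labels the selector must match:
oc get nodes --show-labels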
Tail the pod logs in-line:
oc logs -f <pod-name>
Connect to a pod:
oc rsh <pod-name>
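If the pod has already crashed, oc rsh will not work. In that case, the logs of the previous container instance, or a one-off command in a still-running pod, are usually the next step:

# Logs of the previous, crashed container instance:
oc logs --previous <pod-name>

# Run a single command in a running pod without an interactive shell:
oc exec <pod-name> -- ls /var/log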
Troubleshooting Routes
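A minimal first pass, assuming the default HAProxy router deployed as "router" in the "default" namespace (adjust for the actual installation):

# List the routes in the project and inspect the failing one:
oc -n <project-name> get routes
oc -n <project-name> describe route/<route-name>

# Check the router pods and their logs:
oc -n default get pods -l deploymentconfig=router
oc -n default logs <router-pod-name>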
Logging
Troubleshooting Kibana
Login loop troubleshooting:
- https://docs.openshift.com/container-platform/latest/install_config/aggregate_logging.html#troubleshooting-kibana
- OpenShift Kibana login fails with redirect loop or "Unable to connect": https://access.redhat.com/solutions/2909691
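A common first step, assuming the default 3.x aggregated logging deployment in the "logging" namespace, where the Kibana pod carries a separate kibana-proxy OAuth proxy container:

# Locate the Kibana pod:
oc -n logging get pods -l component=kibana

# The OAuth proxy container usually logs the authentication errors:
oc -n logging logs <kibana-pod-name> -c kibana-proxy

# A stale OAuth client is a frequent cause of the login loop:
oc get oauthclient kibana-proxy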