OpenShift Troubleshooting
Internal
Overview
The general technique is to increase the logging level of the various OpenShift master (api, controllers) and node processes:
--loglevel=10 seems to work fine.
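A minimal sketch of raising the level on an RPM-based OCP 3.x install, where the master and node services are systemd-managed and read their options from /etc/sysconfig (the file and unit names below assume that layout and vary by version and component):

# /etc/sysconfig/atomic-openshift-master-api (assumed OCP 3.x layout)
OPTIONS=--loglevel=10

# restart the service so the new level takes effect
systemctl restart atomic-openshift-master-api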
Then tail the journalctl log of the affected process.
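For example, to follow the master API process (the unit name assumes the atomic-openshift 3.x packages):

journalctl -f -u atomic-openshift-master-api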
General Troubleshooting
Troubleshooting OpenShift Container Platform: Basics
Master/Node Processes Troubleshooting
- OpenShift Runtime
Metrics Troubleshooting
oadm diagnostics MetricsApiProxy
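If the diagnostic reports problems, a reasonable follow-up is to inspect the metrics pods directly. The commands below assume the metrics stack was deployed into the default openshift-infra project; adjust the project name if yours differs:

oc -n openshift-infra get pods
oc -n openshift-infra logs <hawkular-metrics-pod-name>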
Troubleshooting Pods
oc -n <project-name> get pods
oc -n <project-name> describe po/<pod-name>
...
Name:           logging-fluentd-3kz30
...
Node:           node2/
...
Status:         Failed
Reason:         MatchNodeSelector
Message:        Pod Predicate MatchNodeSelector failed
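A MatchNodeSelector failure such as the one above means no node's labels satisfy the pod's nodeSelector. One way to compare the two sides (standard oc commands; the grep filter is only an illustration):

oc get nodes --show-labels
oc -n <project-name> get po/<pod-name> -o yaml | grep -A 3 nodeSelector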
Follow a pod's logs in-line:
oc logs -f <pod-name>
Connect to a pod:
oc rsh <pod-name>
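Relatedly, a single command can be run in the pod without an interactive shell via oc exec (the example command assumes a typical Linux image):

oc -n <project-name> exec <pod-name> -- ls /var/log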
Troubleshooting Routes
Troubleshooting Logging
oadm diagnostics AggregatedLogging
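The diagnostic can be complemented by looking at the aggregated logging pods themselves. The commands assume the stack was deployed into the default logging project; substitute your own project if it lives elsewhere:

oc -n logging get pods
oc -n logging logs <logging-pod-name>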
Troubleshooting Kibana
Login loop troubleshooting:
- https://docs.openshift.com/container-platform/latest/install_config/aggregate_logging.html#troubleshooting-kibana
- OpenShift Kibana login fails with redirect loop or "Unable to connect": https://access.redhat.com/solutions/2909691
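As the references above discuss, the login loop is often an OAuth redirect mismatch between the Kibana route and its OAuth client. A first check, assuming the default OCP 3.x aggregated logging deployment (kibana-proxy is the client name that deployment creates):

oc get oauthclient/kibana-proxy -o yaml
# verify that redirectURIs contains the exact URL of the Kibana route
oc -n logging get route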