OpenShift Troubleshooting
Internal
Overview
The general technique is to increase the logging level of the various OpenShift master (api, controllers) and node processes, as described here:
Change the Log Level for OpenShift Processes
--loglevel=10 seems to work fine.
Then tail the log with journalctl. More details on getting logging information are provided in the sections below.
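As an illustration, on a systemd-based OpenShift 3.x installation the sequence looks as follows (the unit and file names are assumptions and vary with the version and installation method; on an HA master the API and controllers run as separate services):
# set OPTIONS="--loglevel=10" in the service's sysconfig file
vi /etc/sysconfig/atomic-openshift-master-api
systemctl restart atomic-openshift-master-api
# follow the journal of the restarted service
journalctl -u atomic-openshift-master-api -f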
General Troubleshooting
Troubleshooting OpenShift Container Platform: Basics:
Master/Node Processes Troubleshooting
- OpenShift Runtime
Metrics Troubleshooting
oadm diagnostics MetricsApiProxy
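If the diagnostic reports a problem, a reasonable next step, assuming the metrics components were deployed in the default openshift-infra project, is to inspect the metrics pods and their logs:
oc -n openshift-infra get pods
oc -n openshift-infra logs <hawkular-metrics-pod-name>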
Troubleshooting Pods
oc -n <project-name> get pods
oc -n <project-name> describe po/<pod-name>
...
Name:           logging-fluentd-3kz30
...
Node:           node2/
...
Status:         Failed
Reason:         MatchNodeSelector
Message:        Pod Predicate MatchNodeSelector failed
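In the output above, MatchNodeSelector means the pod's node selector does not match the labels of any node it can be scheduled on. One way to compare the two (the jsonpath expression is just an illustration):
oc -n <project-name> get po/<pod-name> -o jsonpath='{.spec.nodeSelector}'
oc get nodes --show-labels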
In-line pod logs:
oc logs -f <pod-name>
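If the pod has more than one container, or the container crashed and was restarted, the container must be named explicitly and the previous instance's log can be requested:
oc logs -f <pod-name> -c <container-name>
oc logs --previous <pod-name>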
Connect to a pod:
oc rsh <pod-name>
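oc rsh opens an interactive shell, which requires a shell binary in the image; a single command can also be executed non-interactively with oc exec (the command below is just an example):
oc exec <pod-name> -- ls /var/log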
Troubleshooting Routes
Logging
Troubleshooting Kibana
Login loop troubleshooting:
- https://docs.openshift.com/container-platform/latest/install_config/aggregate_logging.html#troubleshooting-kibana
- OpenShift Kibana login fails with redirect loop or Unable to connect: https://access.redhat.com/solutions/2909691
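A common first check, assuming the aggregated logging stack was deployed in the default logging project, is the state of the Kibana pods and of the OAuth client used by the Kibana auth proxy:
oc -n logging get pods -l component=kibana
oc get oauthclient kibana-proxy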