OpenShift Troubleshooting
Internal
Overview
The general technique is to increase the logging level of the various OpenShift master (api, controllers) and node processes:
--loglevel=10 seems to work fine.
Then tail the journalctl log for the corresponding process.
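A minimal sketch for OpenShift Container Platform 3.x, assuming the atomic-openshift-master-api systemd unit and its /etc/sysconfig configuration file; unit and file names vary by version and installation method:
# raise the log level in the unit's sysconfig file (assumed location and option format)
sed -i 's/--loglevel=[0-9]*/--loglevel=10/' /etc/sysconfig/atomic-openshift-master-api
systemctl restart atomic-openshift-master-api
# follow the corresponding journal
journalctl -f -u atomic-openshift-master-api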
General Troubleshooting
Troubleshooting OpenShift Container Platform: Basics
Master/Node Processes Troubleshooting
- OpenShift Runtime
Troubleshooting Pods
oc -n <project-name> get pods
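To also see which node each pod was scheduled on, use the standard wide output option:
oc -n <project-name> get pods -o wide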
oc -n <project-name> describe po/<pod-name>
...
Name:       logging-fluentd-3kz30
...
Node:       node2/
...
Status:     Failed
Reason:     MatchNodeSelector
Message:    Pod Predicate MatchNodeSelector failed
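In the example above the scheduler rejected the pod because no node matched its node selector. Comparing the pod's nodeSelector with the labels actually present on the nodes usually identifies the mismatch (a sketch; the pod and project names are placeholders):
# show the pod's node selector
oc -n <project-name> get po/<pod-name> -o yaml | grep -A 2 nodeSelector
# show the labels present on the nodes
oc get nodes --show-labels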
Tail pod logs in-line:
oc logs -f <pod-name>
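If the pod has crashed and restarted, the logs of the previous container instance can be retrieved with the --previous flag:
oc logs --previous <pod-name>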
Connect to a pod:
oc rsh <pod-name>
Connect to a Pod as Root
Log into the physical node running the pod. The pod-to-node association can be inferred by executing
oc describe po/<pod-name>
and extracting the "Node:" value.
On the node, run
docker ps
and identify the container ID. Then:
docker exec -u 0 -it <container-id> /bin/bash
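Since Kubernetes embeds the pod name in the Docker container name, the docker ps output can be narrowed down by filtering on it (a sketch, reusing the pod name from the example above):
docker ps --filter "name=logging-fluentd-3kz30"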
For more details, see the docker exec documentation.
As root, various diagnostic utilities can be temporarily installed:
yum install bind-utils
Copy Files into Pods
Files can be copied in and out of a running pod with oc cp.
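A minimal usage sketch; the paths, pod name and project name are placeholders:
# copy a local file into the pod
oc -n <project-name> cp ./local-file <pod-name>:/tmp/local-file
# copy a file out of the pod
oc -n <project-name> cp <pod-name>:/tmp/remote-file ./remote-file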
Troubleshooting Routes
Troubleshooting Logging
oadm diagnostics AggregatedLogging
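oadm diagnostics can also be run with no arguments to execute every available diagnostic check rather than just the logging one:
oadm diagnostics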
Troubleshooting Kibana
Troubleshooting Kibana in OpenShift
Troubleshooting Metrics
oadm diagnostics MetricsApiProxy