OpenShift Installation Validation
Revision as of 02:44, 11 November 2017
External
Internal
Connect to the Support Node
As "ansible":
On All Nodes
OpenShift Packages
ansible nodes -m shell -a "yum list installed | grep openshift"
The desired OpenShift version must be installed.
OpenShift Version
ansible nodes -m shell -a "/usr/bin/openshift version"

master1.local | SUCCESS | ...
openshift v3.5.5.26
kubernetes v1.5.2+43a9be4
etcd 3.1.0
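The version check above can be automated with a small shell helper. This is a minimal sketch, not part of the page's tooling: the function name is illustrative, and the sample output is the one shown above, stored in a variable so the parsing can be demonstrated without a live cluster.

```shell
#!/bin/sh
# Desired release, taken from the example output on this page; adjust as needed.
desired="v3.5.5.26"

extract_openshift_version() {
    # Reads "openshift version" output on stdin and prints the token that
    # follows "openshift ", e.g. "v3.5.5.26".
    awk '/^openshift / { print $2 }'
}

# Sample "openshift version" output, as shown above.
sample_output="openshift v3.5.5.26
kubernetes v1.5.2+43a9be4
etcd 3.1.0"

actual=$(printf '%s\n' "$sample_output" | extract_openshift_version)
if [ "$actual" = "$desired" ]; then
    echo "OK: running $actual"
else
    echo "MISMATCH: want $desired, got $actual"
fi
```

In practice the same function would be fed the per-node output collected by the ansible command, one node at a time.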
Exported Filesystems
On the support node run exportfs and make sure the following filesystems are exported:
exportfs
/nfs                192.168.122.0/255.255.255.0
/nfs/registry       <world>
/nfs/metrics        <world>
/nfs/logging        <world>
/nfs/logging-es-ops <world>
/nfs/etcd           <world>
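The export check can be scripted. A minimal sketch, with an illustrative function name; the required list comes from the example above, and captured exportfs output is passed in as a string so the logic can be exercised without the support node:

```shell
#!/bin/sh
# Filesystems this page expects the support node to export.
required_exports="/nfs /nfs/registry /nfs/metrics /nfs/logging /nfs/logging-es-ops /nfs/etcd"

check_exports() {
    # $1: exportfs output. Prints any required export that is missing.
    missing=""
    for fs in $required_exports; do
        # Match the path at the start of a line, followed by whitespace,
        # so /nfs does not also match /nfs/registry.
        if ! printf '%s\n' "$1" | grep -q "^$fs[[:space:]]"; then
            missing="$missing $fs"
        fi
    done
    printf '%s' "$missing"
}

# Sample exportfs output, as shown above.
sample="/nfs                192.168.122.0/255.255.255.0
/nfs/registry       <world>
/nfs/metrics        <world>
/nfs/logging        <world>
/nfs/logging-es-ops <world>
/nfs/etcd           <world>"

missing=$(check_exports "$sample")
if [ -z "$missing" ]; then
    echo "all required filesystems exported"
else
    echo "missing:$missing"
fi
```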
On Masters
On each master node, run as root:
oc get nodes --show-labels
Output example:
NAME                           STATUS                     AGE       LABELS
infranode1.openshift35.local   Ready                      17m       beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,cluster=hadron,env=infra,kubernetes.io/hostname=infranode1.openshift35.local,logging-infra-fluentd=true,logging=true
infranode2.openshift35.local   Ready                      17m       beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,cluster=hadron,env=infra,kubernetes.io/hostname=infranode2.openshift35.local,logging-infra-fluentd=true,logging=true
master1.openshift35.local      Ready,SchedulingDisabled   17m       beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,cluster=hadron,kubernetes.io/hostname=master1.openshift35.local,logging-infra-fluentd=true,logging=true,openshift_schedulable=False
master2.openshift35.local      Ready,SchedulingDisabled   17m       beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,cluster=hadron,kubernetes.io/hostname=master2.openshift35.local,logging-infra-fluentd=true,logging=true,openshift_schedulable=False
master3.openshift35.local      Ready,SchedulingDisabled   17m       beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,cluster=hadron,kubernetes.io/hostname=master3.openshift35.local,logging-infra-fluentd=true,logging=true,openshift_schedulable=False
node1.openshift35.local        Ready                      17m       beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,cluster=hadron,env=app,kubernetes.io/hostname=node1.openshift35.local,logging-infra-fluentd=true,logging=true
node2.openshift35.local        Ready                      17m       beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,cluster=hadron,env=app,kubernetes.io/hostname=node2.openshift35.local,logging-infra-fluentd=true,logging=true
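The node readiness check can be sketched as a small filter over the output above. The function name is illustrative; it only assumes the column layout shown in the example (NAME in column 1, STATUS in column 2):

```shell
#!/bin/sh
not_ready_nodes() {
    # Reads "oc get nodes" output on stdin, skips the header, and prints the
    # name of any node whose STATUS does not begin with "Ready" (so
    # "Ready,SchedulingDisabled" passes but "NotReady" is reported).
    awk 'NR > 1 && $2 !~ /^Ready(,|$)/ { print $1 }'
}

# Abbreviated sample in the format shown above, with one bad node added
# for illustration.
sample="NAME                        STATUS                     AGE
node1.openshift35.local     Ready                      17m
master1.openshift35.local   Ready,SchedulingDisabled   17m
node2.openshift35.local     NotReady                   17m"

bad=$(printf '%s\n' "$sample" | not_ready_nodes)
if [ -z "$bad" ]; then
    echo "all nodes Ready"
else
    printf 'nodes not Ready:\n%s\n' "$bad"
fi
```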
Web Console
At this point the web console should be exposed on the external interface.
Use the administrative user defined as part of your "identity provider" declaration.
Verify etcd
On nodes that run etcd, as root:
etcdctl cluster-health
etcdctl member list
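The health verdict can be checked mechanically. A minimal sketch, assuming the v2 etcdctl output format in which cluster-health prints one "member ... is healthy/unhealthy" line per member and a final "cluster is healthy" verdict; the function name and sample text are illustrative:

```shell
#!/bin/sh
cluster_is_healthy() {
    # $1: captured "etcdctl cluster-health" output. Succeeds only if the
    # final verdict is healthy and no member line reports unhealthy.
    printf '%s\n' "$1" | grep -q '^cluster is healthy$' &&
        ! printf '%s\n' "$1" | grep -q 'is unhealthy'
}

# Illustrative sample output for a two-member cluster.
sample="member 6e3bd23ae5f1eae0 is healthy: got healthy result from https://master1.openshift35.local:2379
member 924e2e83e93f2560 is healthy: got healthy result from https://master2.openshift35.local:2379
cluster is healthy"

if cluster_is_healthy "$sample"; then
    echo "etcd cluster healthy"
else
    echo "etcd cluster NOT healthy"
fi
```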
Docker Logs
Log into a few nodes and take a look at the docker logs:
journalctl -f -u docker
Docker Startup Parameters
From the support/installation server, execute as "ansible":
ansible nodes -m shell -a "ps -ef | grep dockerd | grep -v grep"
Make sure "--selinux-enabled" and "--insecure-registry 172.30.0.0/16" are present.
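The flag check can be sketched as a small helper run against each node's process line. The function name and the sample ps line are illustrative, not from the page:

```shell
#!/bin/sh
# Options this page requires on the dockerd command line.
required_flags="--selinux-enabled --insecure-registry"

dockerd_flags_ok() {
    # $1: a dockerd command line. Succeeds if every required flag is present.
    for flag in $required_flags; do
        case "$1" in
            *"$flag"*) ;;
            *) return 1 ;;
        esac
    done
    return 0
}

# Illustrative process line, in the shape returned by the ansible command above.
line="root 1234 1 0 10:00 ? 00:01:02 /usr/bin/dockerd-current --selinux-enabled --insecure-registry 172.30.0.0/16"

if dockerd_flags_ok "$line"; then
    echo "dockerd flags OK"
else
    echo "dockerd flags missing"
fi
```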
Logging
ElasticSearch
ElasticSearch should be deployed, running, and operational - logs must not contain errors:
oc project logging
oc get pods -l 'component=es'
oc logs -f logging-es-3fs5ghyo-3-88749
or
oc logs -f logging-es-data-master-ebitzkr3-1-x0jch
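Scanning the ElasticSearch logs for errors can be automated. A minimal sketch, assuming the bracketed log format used by ElasticSearch 2.x (timestamp, then level, then subsystem); the function name and sample lines are illustrative:

```shell
#!/bin/sh
log_errors() {
    # Reads log text on stdin and prints lines at ERROR or FATAL level.
    # "|| true" keeps the exit status 0 when no errors are found.
    grep -E '\[(ERROR|FATAL)\]' || true
}

# Illustrative clean log sample.
sample="[2017-11-11 02:44:00,123][INFO ][cluster.service] new_master
[2017-11-11 02:44:05,456][INFO ][node] started"

errors=$(printf '%s\n' "$sample" | log_errors)
if [ -z "$errors" ]; then
    echo "no errors in log sample"
else
    printf 'errors found:\n%s\n' "$errors"
fi
```

The same filter would be applied to the real output of `oc logs` on the es pod.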
fluentd
Every node should run a fluentd pod; the fluentd pods should be operational and their logs must not contain errors.
oc project logging
oc get pods -l 'component=fluentd'

NAME                    READY     STATUS    RESTARTS   AGE
logging-fluentd-2r4rt   1/1       Running   0          49m
logging-fluentd-37d72   1/1       Running   0          35m
logging-fluentd-4ljkn   1/1       Running   0          3h
logging-fluentd-74l39   1/1       Running   0          3h
logging-fluentd-7l25h   1/1       Running   0          3h
logging-fluentd-sbh7r   1/1       Running   0          3h
logging-fluentd-w4shg   1/1       Running   0          39m
oc logs -f logging-fluentd-2r4rt ...
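The fluentd pod check can be sketched as a filter over the `oc get pods` listing: report any pod that is not 1/1 Running, and count the healthy ones for comparison against the number of nodes. The function name is illustrative and only the column layout shown above is assumed:

```shell
#!/bin/sh
unhealthy_fluentd() {
    # Reads "oc get pods" output on stdin, skips the header, and prints the
    # name of any pod that is not READY 1/1 with STATUS Running.
    awk 'NR > 1 && !($2 == "1/1" && $3 == "Running") { print $1 }'
}

# Abbreviated sample in the format above, with one failing pod added
# for illustration.
sample="NAME                    READY     STATUS             RESTARTS   AGE
logging-fluentd-2r4rt   1/1       Running            0          49m
logging-fluentd-37d72   1/1       Running            0          35m
logging-fluentd-4ljkn   0/1       CrashLoopBackOff   5          3h"

bad=$(printf '%s\n' "$sample" | unhealthy_fluentd)
if [ -z "$bad" ]; then
    echo "all fluentd pods healthy"
else
    printf 'unhealthy fluentd pods:\n%s\n' "$bad"
fi
```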
Kibana
Kibana should be deployed, running, and operational - logs must not contain errors:
oc project logging
oc get pods -l 'component=kibana'
oc logs -f -c kibana-proxy logging-kibana-10-sb7sk
oc logs -f -c kibana logging-kibana-10-sb7sk
The Logging Portal
The logging portal should be available:
https://kibana.apps.openshift.novaordis.io/
Metrics
All metrics pods should be running:

oc project openshift-infra
oc get pods

NAME                         READY     STATUS    RESTARTS   AGE
hawkular-cassandra-1-pgd97   1/1       Running   0          40m
hawkular-metrics-zl9n5       1/1       Running   0          40m
heapster-2ngln               1/1       Running   0          40m
https://hawkular-metrics.apps.openshift.novaordis.io/hawkular/metrics
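The metrics pod check can be sketched as well: verify that each component named in the listing above has at least one Running pod. The function name is illustrative and the pod-name prefixes come from the example output:

```shell
#!/bin/sh
# Metrics components this page expects, matched by pod-name prefix.
components="hawkular-cassandra hawkular-metrics heapster"

missing_metrics_components() {
    # $1: "oc get pods" output from the openshift-infra project.
    # Prints each component that has no Running pod.
    for c in $components; do
        printf '%s\n' "$1" | grep -q "^$c-.*Running" || echo "$c"
    done
}

# Sample listing, as shown above.
sample="NAME                         READY     STATUS    RESTARTS   AGE
hawkular-cassandra-1-pgd97   1/1       Running   0          40m
hawkular-metrics-zl9n5       1/1       Running   0          40m
heapster-2ngln               1/1       Running   0          40m"

missing=$(missing_metrics_components "$sample")
if [ -z "$missing" ]; then
    echo "all metrics components running"
else
    printf 'components with no Running pod:\n%s\n' "$missing"
fi
```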