OpenShift Service Concepts



Overview

A service represents a group of pods, which may individually come and go; its primary function is to provide a permanent IP address, hostname and port for other applications to use. Each service has a service IP address and a port, allocated from the services subnet. The service IP address is sometimes referred to as the "Cluster IP", which is consistent with the abstraction the service provides: a cluster of OpenShift-managed pods delivering the service. Together, the services constitute the service layer.

If the service definition includes a selector, that selector is used to identify the pods that are part of the service's "cluster". The EndpointsController system component associates a service with the endpoints of the pods that match the selector. Once the association is made, the actual network traffic forwarding is performed by a service proxy. More about endpoints is available here: service endpoints.

In some special cases, a service can represent a set of pods running outside the current project, or an instance running outside OpenShift altogether. Services representing external resources do not require associated pods, and hence do not need a selector; they are called external services. If the selector is not set, the EndpointsController ignores the service, and endpoints can be specified manually. The procedure to declare services from another project, or external services, is available here:

Integrating External Services
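
As a minimal sketch of this pattern (the service name, IP address and port below are hypothetical), a selector-less service can be paired with a manually maintained Endpoints resource whose name matches the service's name:

apiVersion: v1
kind: Service
metadata:
  name: external-database
spec:
  ports:
  - port: 3306
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-database
subsets:
- addresses:
  - ip: 192.168.100.10
  ports:
  - port: 3306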

The service IP is available internally in the cluster. It is not routable (in the IP sense) externally. The service can be exposed externally via a route.
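
As a sketch, assuming a hypothetical service named "myservice", exposing it and inspecting the resulting route can be done with:

oc expose service myservice
oc get route myservice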

A service resource is an abstraction that defines a logical set of pods and a policy used to access them. The service layer is how applications communicate with one another. The service is a Kubernetes concept. A service serves as an internal load balancer: it identifies a set of replicated pods and proxies the connections it receives to those pods (routers provide external load balancing). The service is not a "thing", but an entry in the configuration. Backing pods can be added to or removed from the service arbitrarily, so anything that depends on the service can refer to it as a consistent IP:port pair. The service uses a label selector to find all the running containers associated with it.

Services can be consumed from other pods using the values of the <SERVICE>_SERVICE_HOST and <SERVICE>_SERVICE_PORT environment variables that the cluster injects into containers.
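
For example, for the "logging-kibana" service shown further below, a process in another pod of the same project could read the injected variables (the variable names are derived from the service name, upper-cased, with dashes replaced by underscores; they are only injected into pods created after the service):

echo $LOGGING_KIBANA_SERVICE_HOST
echo $LOGGING_KIBANA_SERVICE_PORT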

Service Definition

Services of a project can be displayed with:

oc get all
oc get svc
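
As a minimal sketch (the service name, label selector and ports are hypothetical), a service definition that selects its backing pods by label could look like this:

apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  selector:
    app: myapp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080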

Service Endpoint

A <pod IP address>:<port> pair that corresponds to a process running inside a container, in a pod, on a node. The association between services and endpoints is performed by the EndpointsController, as described above. The service endpoint coordinates can be obtained by executing:

oc describe service <service-name>
Name:			logging-kibana
...
Endpoints:		10.129.2.17:3000
...
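
The endpoints can also be queried directly; assuming the same service name, a sketch of the equivalent command is:

oc get endpoints logging-kibana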

Service Proxy

The service proxy is a simple network proxy that reflects the services defined in the API on each node. Each node runs a service proxy instance. The service proxy performs simple TCP and UDP stream forwarding across a set of backends.

Service Dependencies

TODO: It seems there's a way to express dependencies between services:

apiVersion: v1
kind: Service
metadata:
  name: jenkins
  annotations:
    service.alpha.openshift.io/dependencies: '[{"name": "jenkins-jnlp", "namespace": "", "kind": "Service"}]'
    ...
spec:
    ...

Clarify this.

TO PROCESS: https://blog.giantswarm.io/wait-for-it-using-readiness-probes-for-service-dependencies-in-kubernetes/
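
A sketch of the technique discussed in that article, with hypothetical names and port: the pod declares a readiness probe that succeeds only once its dependency is reachable, so the pod is not added to its own service's endpoints until the dependency is available:

apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: myapp
    image: myapp:latest
    readinessProbe:
      exec:
        command: ["sh", "-c", "nc -z jenkins-jnlp 50000"]
      initialDelaySeconds: 5
      periodSeconds: 10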

Relationship between Services and the Pods providing the Service