OpenShift Service Concepts
External
- https://kubernetes.io/docs/concepts/services-networking/service/
- https://docs.openshift.com/container-platform/latest/architecture/core_concepts/pods_and_services.html#services
Internal
Overview
A service represents functionality provided by the OpenShift cluster, exposed as a permanent IP address, hostname and port for other applications to use. There is no actual process running on a node that can be identified as a service; a service is rather state maintained in the API server. In the vast majority of cases, the functionality is provided by a group of pods managed by OpenShift. The pods provide equivalent, replicated functionality, and they may individually come and go. This situation provides the main use case for services, as described in the "Need for Services" section. The relationship between a pod and a service is defined by a label selector declared by the service:
apiVersion: v1
kind: Service
spec:
  [...]
  selector:
    key1: value1
    key2: value2
A pod will be identified as being part of the service "cluster" if the label selector declared by the service matches the pod's labels.
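As an illustration, a pod carrying matching labels would be selected by the service above. This is a minimal sketch; the pod name and image are hypothetical:

apiVersion: v1
kind: Pod
metadata:
  name: example-pod        # hypothetical name
  labels:
    key1: value1           # matches the service's selector
    key2: value2
spec:
  containers:
  - name: app
    image: example/app     # hypothetical image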
In some special cases, the service can represent a set of pods running outside the current project, or processes running outside OpenShift altogether. This is known as an external service.
The services constitute the service layer, which is how applications communicate with one another.
The service serves as an internal load balancer: it identifies a set of replicated pods and proxies the connections it receives to those pods (routers provide external load balancing). The service is not a "thing", but an entry in the configuration. Backing pods can be added to or removed from the service arbitrarily. This way, anything that depends on the service can refer to it as a consistent IP:port pair. The service uses a label selector to find all the running containers associated with it.
Services can be consumed from other pods using the values of the <SERVICE>_SERVICE_HOST and <SERVICE>_SERVICE_PORT environment variables that the cluster injects into containers, for services that exist at the time the pod is created.
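For example, assuming a service named "serviceA" in the same project, a container created after the service could read (a sketch; the variable names are derived from the upper-cased service name, with dashes converted to underscores):

echo $SERVICEA_SERVICE_HOST
echo $SERVICEA_SERVICE_PORT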
Services of a project can be displayed with:
oc get all
oc get svc
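Illustrative output for the second command; the values are hypothetical, and the columns are those of OpenShift 3.x:

NAME       CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
serviceA   172.30.73.18   <none>        80/TCP    2d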
Need for Services
Pods are transient: they may be started, and more importantly die, at any time, and the IP addresses at which their endpoints are accessible are dynamically allocated from the cluster network. This dynamic behavior is hidden behind the service abstraction, which provides the appearance of a stable, consistent service backed by a permanent IP address. A client that needs the functionality provided by the service contacts the well-known address - the service's address - and the service proxies the call, via a service proxy, to one of the actual pods providing the service. From this perspective, the service acts as a load balancer.
Service Address
Each service has a service IP address, allocated from the services subnet, and a port. The service IP address is sometimes referred to as the "Cluster IP", which is consistent with the abstraction provided by the service: a cluster of OpenShift-managed pods providing the service:
apiVersion: v1
kind: Service
spec:
  [...]
  clusterIP: 172.30.73.18
  ports:
  - name: 80-tcp
    port: 80
    protocol: TCP
    targetPort: 80
The service IP is only available internally in the cluster. It is not routable (in the IP sense) externally. The service can be exposed externally via a route.
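For example, a route can be created for a service with oc expose; this is a sketch, and the service and host names are hypothetical:

oc expose service serviceA --hostname=servicea.example.com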
Service Endpoints
The EndpointsController system component continuously evaluates the services' selectors, and the information about the pods that match those selectors is published as part of the state of an API primitive named Endpoints. The name of the Endpoints object is the same as the name of the corresponding Service:
oc get -o yaml endpoints/serviceA
apiVersion: v1
kind: Endpoints
metadata:
  name: serviceA
subsets:
- addresses:
  - ip: 10.130.1.0
    nodeName: node1
    targetRef:
      kind: Pod
      name: serviceA-1-gr7rh
  ports:
  - name: 80-tcp
    port: 80
    protocol: TCP
EndpointsController
The EndpointsController system component associates a service with the endpoints of the pods that match its selector. Once the association is made, the actual forwarding of network traffic is performed by a service proxy. More about endpoints is available here: service endpoints.
Service Endpoint
A <pod IP address>:<port> pair that corresponds to a process running inside a container, on a pod, on a node. The association between services and endpoints is performed by the EndpointsController, as described above. A service's endpoint coordinates can be obtained by executing:
oc describe service <service-name>
Name:       logging-kibana
...
Endpoints:  10.129.2.17:3000
...
Service Proxy
The service proxy is a simple network proxy that represents, on the node, the services defined in the API. Each node runs a service proxy instance. The service proxy performs simple TCP and UDP stream forwarding across a set of backends.
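On nodes where the proxy runs in iptables mode, the forwarding rules it programs can be inspected directly; this is a sketch, assuming a service named "serviceA":

iptables-save | grep serviceA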
Service Dependencies
TODO: It seems there's a way to express dependencies between services:
apiVersion: v1
kind: Service
metadata:
  name: jenkins
  annotations:
    service.alpha.openshift.io/dependencies: '[{"name": "jenkins-jnlp", "namespace": "", "kind": "Service"}]'
  ...
spec:
  ...
Clarify this.
TO PROCESS: https://blog.giantswarm.io/wait-for-it-using-readiness-probes-for-service-dependencies-in-kubernetes/
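The idea in the linked article, roughly: a pod only becomes a service endpoint once its readiness probe succeeds, so a probe that checks a dependency keeps the pod out of the service until the dependency is reachable. A minimal sketch; the pod name, image and probed dependency are hypothetical:

apiVersion: v1
kind: Pod
metadata:
  name: app-pod                 # hypothetical name
spec:
  containers:
  - name: app
    image: example/app          # hypothetical image
    readinessProbe:
      exec:
        # hypothetical check: succeed only once the dependency answers on its port
        command: ["sh", "-c", "nc -z jenkins-jnlp 50000"]
      initialDelaySeconds: 5
      periodSeconds: 10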
Relationship between Services and the Pods providing the Service
External Service
In some special cases, the service can represent a set of pods running outside the current project, or an instance running outside OpenShift altogether. Services representing external resources do not require associated pods, hence do not need a selector. They are called external services. If the selector is not set, the EndpointsController ignores the service, and endpoints can be specified manually. The procedure to declare services from other projects, or external services, is available here:
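A minimal sketch of an external service with manually specified endpoints; the names, port and IP address are hypothetical:

apiVersion: v1
kind: Service
metadata:
  name: external-db            # hypothetical name
spec:
  # no selector, so the EndpointsController leaves the endpoints alone
  ports:
  - port: 3306
    protocol: TCP
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-db            # must match the service name
subsets:
- addresses:
  - ip: 192.0.2.10             # hypothetical external IP address
  ports:
  - port: 3306
    protocol: TCP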