OpenShift Concepts TODEPLETE

=DEPLETE TO=
{{Internal|OpenShift Concepts|OpenShift Concepts}}
=External=
* https://docs.openshift.com/container-platform/latest/welcome/index.html
=Internal=
* [[OpenShift]]
* [[Kubernetes]]
* [[Docker]]
=OpenShift Workflow=
Users or automation make calls to the [[#API|REST API]] via the command line, web console or programmatically, to change the state of the system. The master analyses the state changes and acts to bring the state of the system in sync with the desired state. The state is maintained in the [[#Store_Layer|store layer]]. Most OpenShift commands and API calls do not require the corresponding actions to be performed immediately. They usually create or modify a resource description in [[#etcd|etcd]]. etcd then notifies Kubernetes or the OpenShift [[OpenShift_Pod_Concepts#Controller|controllers]], which notify the resource about the change. Eventually, the system state reflects the change.
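For example, scaling a deployment configuration only records a new desired replica count in the [[#Store_Layer|store layer]]; the corresponding [[OpenShift_Pod_Concepts#Controller|controller]] then converges the actual number of pods toward it. The deployment configuration name below is hypothetical:

 oc scale dc/myapp --replicas=3    # only records the new desired state
 oc get pods -w                    # watch pods being created until the actual state matches the desired state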
OpenShift functionality is provided by the interaction of several layers:
* <span id='Store_Layer'></span>The '''store layer''' holds the state of the OpenShift environment, which includes configuration information, current state and the desired state. It is implemented by [[#etcd|etcd]].
* <span id='Authentication_Layer'></span>The [[OpenShift Security Concepts#Authentication_Layer|authentication layer]] provides a framework for collaboration and quota management.
* <span id='Scheduling_Layer'></span>Scheduling is the OpenShift master's main function; it is implemented in the '''scheduling layer''', which contains the [[#Scheduler|scheduler]].
* <span id='Service_Layer'></span>The '''service layer''' handles internal requests. It provides an abstraction of [[#Service_Endpoint|service endpoints]] and provides internal load balancing. The [[#Pod|pods]] get [[#Service_IP_Address|IP addresses]] associated with  [[#Service_Endpoint|the service endpoints]]. Note that the external requests are not handled by the service layer, but by the [[#Routing_Layer|routing layer]].
* The [[#Routing_Layer|routing layer]] handles external requests, to and from the applications. For a description of the relationship between a [[#Service|service]] and a [[#Router|router]], see [[#Relationship_between_Service_and_Router|relationship between a service and router]].
* The <span id='Networking_Layer'></span>'''networking layer''' routes application requests to the business logic serving them, according to the [[#Networking_Workflow|networking workflow]].
* <span id='Replication_Layer'></span>The '''replication layer''' contains the [[#Replication_Controller|replication controller]], whose role is to ensure that the number of instances and pods defined in the [[#Store_Layer|store layer]] actually exist.
=<span id='OpenShift_Component_Types'></span><span id='Component_Types'></span>Objects=
==Core Objects==
Defined by Kubernetes, and also used by OpenShift. All these objects have an API type, which is listed below.
* [[OpenShift_Concepts#Project|Projects]] ([[Oc_get#project|oc get project]]), defined by <tt>[[OpenShift_Project_Definition|Project]]</tt>.
* [[OpenShift_Concepts#Pod|Pods]] ([[Oc_get#pod.2C_pods.2C_po|oc get pods]]) defined by <tt>[[OpenShift_Pod_Definition|Pod]]</tt>.
* [[OpenShift_Concepts#Node|Nodes]] ([[Oc_get#node.2C_nodes|oc get nodes]]) defined by <tt>[[OpenShift_Node_Definition|Node]]</tt>.
* [[OpenShift_Concepts#DeploymentConfig|Deployment Configurations]] ([[Oc_get#dc|oc get dc]]) defined by  <tt>[[OpenShift DeploymentConfig Definition#Overview|DeploymentConfig]]</tt>.
* [[OpenShift_Concepts#Service|Services]] ([[Oc_get#services.2C_svc|oc get svc]]) defined by <tt>[[OpenShift Service Definition#Overview|Service]]</tt>.
* [[OpenShift_Concepts#Route|Routes]] ([[Oc_get#routes.2C_routes|oc get route]]) defined by <tt>[[OpenShift Route Definition#Overview|Route]]</tt>.
* [[OpenShift_Concepts#Replication_Controller|Replication Controllers]] ([[Oc_get#rc|oc get rc]]) defined by <tt>[[OpenShift ReplicationController Definition#Overview|ReplicationController]]</tt>.
* [[OpenShift_Security_Concepts#Secret|Secrets]] ([[Oc_get#secret|oc get secret]]) defined by <tt>[[OpenShift Secret Definition|Secret]]</tt>.
* [[OpenShift_Concepts#DaemonSet|Daemon Sets]].
==OpenShift Objects==
Defined by OpenShift, outside Kubernetes. These objects also have an API type, which is listed below.


* [[OpenShift_Concepts#Image|Images]] defined by <tt>[[OpenShift_Image_Definition#Overview|Image]]</tt>.
* [[#Image_Stream|Image Streams]] ([[Oc_get#is|oc get is]]) defined by <tt>[[OpenShift Image Stream Definition#Overview|ImageStream]]</tt>.
* [[OpenShift_Concepts#Image_Stream_Tag|Image Stream Tags]] ([[Oc_get#istag|oc get istag]]).
* [[OpenShift_Concepts#Build|Builds]] ([[Oc_get#build.2C_builds|oc get build]]).
* [[OpenShift_Concepts#Build_Configuration|Build Configurations]] ([[Oc_get#bc|oc get bc]]) defined by <tt>[[OpenShift Build Configuration Definition#Overview|BuildConfig]]</tt>.
* [[#Template|Templates]] defined by <tt>[[OpenShift Template Definition|Template]]</tt>.
* [[OpenShift_Security_Concepts#OAuthClient|OAuthClient]].


==Other Objects that do not have an API Representation==
* [[OpenShift_Concepts#Container|Containers]]
* [[OpenShift_Concepts#Label|Labels]]
* [[OpenShift_Concepts#Volume|Volumes]]

=Overview=

OpenShift is supported anywhere RHEL is: bare metal, virtualized infrastructure (Red Hat Virtualization, vSphere, Hyper-V), the OpenStack platform, public cloud providers (Amazon, Google, Azure). It runs on RHEL and Red Hat Atomic.

=OpenShift Hosts=

==Master==


A master is a RHEL or Red Hat Atomic host that ''[[Kubernetes Concepts#Overview|orchestrates]]'' and ''schedules'' resources.

The master ''maintains the state'' of the OpenShift environment.


The master provides the ''single API'' all tooling clients must interact with. All OpenShift tools (CLI, web console, IDE plugins, etc.) speak directly with the master.


The access is protected via fine-grained role-based access control (RBAC).


The master monitors application health via user-defined [[#Pod_Probe|pod probes]], ensuring that all containers that should be running are running. It handles restarting [[#Pod|pods]] that failed probes automatically. Pods that fail too often are marked as "failing" and are temporarily not restarted. The OpenShift [[#Service_Layer|service layer]] sends traffic only to healthy pods.
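For illustration, this is a minimal sketch of user-defined probes in a pod definition; the image, paths and port are hypothetical:

 apiVersion: v1
 kind: Pod
 metadata:
   name: example
 spec:
   containers:
   - name: app
     image: registry.example.com/app:latest
     # a failed liveness probe causes the container to be restarted
     livenessProbe:
       httpGet:
         path: /healthz
         port: 8080
       initialDelaySeconds: 30
     # a failed readiness probe removes the pod from the service endpoints
     readinessProbe:
       httpGet:
         path: /ready
         port: 8080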
 
{{Internal|Kubernetes Concepts#Master|Kubernetes Master}}
 
Masters use the [[#etcd|etcd]] cluster to store state, and also [[#etcd_and_Master_Caching|cache some of the metadata]] in their own memory space.
 
===Master HA===
 
Multiple masters can be present to ensure HA. A typical HA configuration involves three masters and three [[#Etcd|etcd]] nodes. Such a topology is built by the OpenShift 3.5 Ansible inventory file shown here.
 
An alternative HA configuration consists of a single master node and multiple (at least three) [[#Etcd|etcd]] nodes.
 
====The Native HA Method====
 
===Master API===
 
The master API is available internally (inside the OpenShift cluster) at https://openshift.default.svc.cluster.local. This value is available internally to pods as the KUBERNETES_SERVICE_HOST environment variable.
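To verify what a running pod sees, its environment can be inspected; the pod name is hypothetical:

 oc rsh <''pod-name''> env | grep KUBERNETES_SERVICE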


==Node==


A node is a ''Linux container host''. It is based on RHEL or Red Hat Atomic and provides a runtime environment where applications run inside [[#Container|containers]], which run inside [[#Pod|pods]] assigned by the [[#Master|master]]. Nodes are orchestrated by [[#Master|masters]] and are usually managed by administrators, not by end users. Nodes can be organized into different topologies. From a networking perspective, nodes get allocated their own subnets of the [[#The_Cluster_Network|cluster network]], which they then use to assign IP addresses to the pods and containers running on them. More details about OpenShift networking are available [[#The_Cluster_Network|here]].

[[OpenShift Concepts#The_node_Daemon|A node daemon]] runs on each node.
 
Node operations:
* [[OpenShift_Node_Operations#Getting_Information_about_a_Node|Getting information about a node]]
* [[OpenShift_Node_Operations#Starting.2FStopping_a_Node|Starting/Stopping a node]]
 
<font color=red>
TODO:
* What is the difference between the kubelet and the node daemon?
* kube proxy daemons.
</font>
 
==Infrastructure Node==
 
Also referred to as "infra node".
 
This is where infrastructure pods run. [[#Metrics|Metrics]], [[#Logging|logging]], [[#Router|routers]] are considered infrastructure pods.
 
The infrastructure nodes, especially those running the metrics pods and ..., should be closely monitored to detect early CPU, memory and disk capacity shortages on the host system.
 
=OpenShift Cluster=
 
All nodes that share the [[#SDN|SDN]].


=Container=
{{External|https://docs.openshift.com/container-platform/latest/architecture/core_concepts/containers_and_images.html#containers}}
A container is a kernel-provided mechanism to run one or more processes, in a portable manner, in a Linux environment. Containers are isolated from each other on a host and are initialized from [[#Image|images]]. All application instances - based on various languages/frameworks/runtimes - as well as databases, run inside containers on [[#Node|nodes]]. For more details, see {{Internal|Docker_Concepts#Container|Docker Containers}}
A [[#Pod|pod]] can have application containers and [[OpenShift_Init_Container#Overview|init containers]].
In OpenShift, containers are never restarted. Instead, new containers are spun up to replace old containers when needed. Because of this behavior, [[OpenShift_Concepts#Persistent_Volume|persistent storage volumes]] mounted on containers are critical for maintaining state such as for configuration files and data files.
=Docker Support in OpenShift=
All containers that are running in pods are managed by [[Docker_Concepts#The_Docker_Server|Docker servers]].
==Docker Storage in OpenShift==
Each Docker server requires Docker storage, which is allocated on each node in a space specially provisioned as [[Docker_Concepts#Storage_Backend|Docker storage backend]]. The Docker storage is ephemeral and separate from [[OpenShift_Volume_Concepts#The_Volume_Mechanism|OpenShift storage]]. The [[Docker_Concepts#Loopback_Storage|default loopback storage]] back end for Docker is a [[Linux Logical Volume Management Concepts#thin_pool|thin pool]] on loopback devices, which is not appropriate for production.
As per OCP 3.7, Docker storage can be backed by one of two options (storage drivers): a [[Docker Concepts#devicemapper-storage-driver|devicemapper]] thin pool logical volume or an [[Docker Concepts#overlayfs-storage-driver|overlay2]] filesystem (recommended). More details about how Docker storage is configured on a specific system can be obtained from:

 cat /etc/sysconfig/docker-storage
 DOCKER_STORAGE_OPTIONS="--storage-driver devicemapper --storage-opt dm.fs=xfs --storage-opt dm.thinpooldev=/dev/mapper/docker_vg-container--thinpool --storage-opt dm.use_deferred_removal=true --storage-opt dm.use_deferred_deletion=true"

and also from:

 docker info


=The node Daemon=


=<span id='Controller'></span><span id='Pod_Configuration'></span><span id='Pod_Lifecycle'></span><span id='Terminal_State'></span><span id='Pod_Definition'></span><span id='Pod_Name'></span><span id='Pod_Definition_File'></span><span id='Pod_Placement'></span><span id='Pod_Probe'></span><span id='Liveness_Probe'></span><span id='Readiness_Probe'></span><span id='Local_Manifest_Pod'></span><span id='Bare_Pod'></span><span id='Static_Pod'></span><span id='Pod_Type'></span><span id='Terminating'></span><span id='NonTerminating'></span><span id='Pod_Presets'></span><span id='Init_Container'></span>Pod=

Contains: controller, pod configuration, pod lifecycle, terminal state, pod definition, pod name, pod placement, pod probe, liveness probe, readiness probe, local manifest pod, bare pod, static pod, pod type, terminating, nonterminating, pod presets, init container.
 
=Label=
 
Labels are simple key/value pairs that can be assigned to any resource in the system and are used to group and select arbitrarily related objects. Most Kubernetes objects can include labels in their metadata. Labels provide the default way of managing objects as ''groups'', instead of having to handle each object individually. Labels are a [[Kubernetes Concepts#Label|Kubernetes concept]].
 
[[OpenShift Get Labels Applied to a Node|Labels associated with a node]] can be obtained with [[oc_describe#node|oc describe node]].
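For illustration, a label can be applied to a node and then used to select objects; the node name and the label keys/values are hypothetical:

 oc label node node1.example.com region=infra
 oc get nodes -l region=infra
 oc get pods -l app=myapp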
 
=Selector=
<span id='Label_Selector'></span>
 
A ''selector'' is a set of labels. It is also referred to as ''label selector''. Selectors are a [[Kubernetes Concepts#Selector|Kubernetes concept]].
 
==Node Selector==
 
A ''node selector'' is an expression evaluated against a [[#Node|node]]: depending on whether the node has or does not have the labels expected by the node selector, [[#Pod_Placement|pod placement]] on the node in question is allowed or prevented during the pod scheduling operation. It is the [[#Scheduler|scheduler]] that evaluates the node selector expression and decides on which node to place the pod. The [[#DaemonSet|DaemonSets]] also use node selectors when placing the associated pods on nodes.
 
Node selectors can be associated with [[#Cluster-Wide_Default_Node_Selector|an entire cluster]], with a project, or with a specific pod. The node selectors can be modified as part of a [[OpenShift Node Selector Operations|node selector operation]].
 
===Cluster-Wide Default Node Selector===
 
The cluster-wide default node selector is configured during OpenShift cluster installation to restrict pod placement to specific nodes. It is specified in the [[Master-config.yml#defaultNodeSelector|projectConfig.defaultNodeSelector]] section of the master configuration file [[master-config.yml]]. It can also be modified after installation with the following procedure:
 
{{Internal|OpenShift_Node_Selector_Operations#Configuring_a_Cluster-Wide_Default_Node_Selector|Configure a Cluster-Wide Default Node Selector}}
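A minimal sketch of the corresponding [[master-config.yml]] fragment, assuming a hypothetical "region=primary" label:

 projectConfig:
   defaultNodeSelector: "region=primary"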
 
===Per-Project Node Selector===
 
The per-project node selector is used by the scheduler to schedule pods associated with the project. The per-project node selector takes precedence over the [[OpenShift_Concepts#Cluster-Wide_Default_Node_Selector|cluster-wide default node selector]], when both exist. It is available as "openshift.io/node-selector" project metadata (see below). If "openshift.io/node-selector" is set to an empty string, the project will not have an administrator-set node selector, even if the [[#Cluster-Wide_Default_Node_Selector|cluster-wide default]] has been set. This means that a cluster administrator can set a default to restrict developer projects to a subset of nodes and still enable infrastructure or other projects to schedule on the entire cluster.
 
The per-project node selector value can be queried with:
 
[[oc_get#project|oc get project -o yaml]]
 
It is listed as:
 
...
kind: Project
<b>metadata</b>:
  <b>annotations</b>:
    ...
    <b>openshift.io/node-selector</b>: ""
    ...
 
The per-project node selector is usually set up when the project is created, as described in this procedure:
 
{{Internal|OpenShift_Node_Selector_Operations#When_the_Project_is_Created|Configuring a Per-Project Node Selector during Project Creation}}
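For example, assuming a hypothetical "region=west" label, a project can be created with its own node selector as follows:

 oadm new-project my-project --node-selector='region=west'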
 
It can also be changed after project creation with <tt>oc edit</tt> as described in this procedure:
 
{{Internal|OpenShift_Node_Selector_Operations#After_the_Project_was_Created|Configuring a Per-Project Node Selector after Project Creation}}
 
===Per-Pod Node Selector===
 
The declaration of a ''per-pod node selector'' can be obtained by running:
 
oc get pod <''pod-name''> -o yaml
 
and it is rendered in the "spec:" section of the pod definition:
 
...
kind: Pod
<b>spec</b>:
  ...
  <b>nodeSelector</b>:
    key-A: value-A
 
 
<font color=red>How is the node selector of a pod generated?</font>
 
<font color=red>TODO: merge with [[OpenShift_Concepts#Pod_Placement]]</font>
 
Once the pod has been created, the node selector value becomes immutable and an attempt to change it will fail. For more details on pod state see [[OpenShift_Concepts#Pod|Pods]].
 
===Precedence Rules when Multiple Node Selectors Apply===
 
<font color=red>TODO</font>
 
===External Resources===
 
* https://docs.openshift.com/container-platform/3.5/admin_guide/managing_projects.html#using-node-selectors
 
=Scheduler=
 
{{External|https://docs.openshift.com/container-platform/3.5/admin_guide/scheduler.html}}
 
Scheduling is the master's main function: when a pod is created, the master determines on what node(s) to execute the pod. This is called ''scheduling''. The layer that handles this responsibility is called the [[#Scheduling_Layer|scheduling layer]].
 
The ''scheduler'' is a component that runs on the master and determines the best fit for running [[#Pod|pods]] across the environment. The scheduler also spreads pod replicas across nodes, for application HA. The scheduler reads data from the [[#Pod_Definition|pod definition]] and tries to find a node that is a good fit based on configured policies. The scheduler does not modify the pod; it creates a binding that ties the pod to the selected node, via the master API.
 
The OpenShift scheduler is based on [[Kubernetes Concepts#Scheduler|Kubernetes scheduler]].
 
Most OpenShift pods are scheduled by the scheduler, unless they are managed by a [[#DaemonSet|DaemonSet]]. In this case, the DaemonSet selects the node to run the pod, and the scheduler ignores the pod.
 
The scheduler is completely independent and exists as a standalone, pluggable solution. The scheduler is deployed as a container, referred to as an ''infrastructure container''. The functionality of the scheduler can be extended in two ways:
# Via enhancements, by adding new predicates and priority functions.
# Via replacement with a different implementation.
 
The pod placement process is described here: {{Internal|OpenShift_Pod_Concepts#Pod_Placement|Pod Placement}}
 
==Default Scheduler Implementation==
 
The default scheduler is a scheduling engine that selects the node to host the pod in three steps:
# Filter all available nodes by running through a list of filter functions called [[#Predicates|predicates]], discarding the nodes that do not meet the criteria.
# Prioritize the remaining nodes by passing them through a series of [[#Priority_Functions|priority functions]] that assign each node a score between 0 and 10, where 10 signifies the best possible fit to run the pod. By default all priority functions are weighted equally, but they can be weighted differently via configuration.
# Sort the nodes by score and select the node with the highest score. If multiple nodes have the same score, one is chosen at random.
 
Note that insight into how the predicates are evaluated and what scheduling decisions are taken can be gained by increasing the logging verbosity of the [[OpenShift_Runtime#master_controller_Daemon|master controllers processes]].
 
==Predicates==
 
===Static Predicates===
 
* PodFitsPorts - a node is fit if there are no port conflicts.
* PodFitsResources - a node is fit based on resource availability. Nodes declare resource capacities, pods specify what resources they require.
* NoDiskConflict - evaluates if a pod fits based on the volumes requested and those already mounted.
* MatchNodeSelector - a node is fit based on the node selector query.
* HostName - a node is fit based on the presence of the host parameter and a string match with the host name.
 
===Configurable Predicates===
 
====ServiceAffinity====
 
ServiceAffinity filters out nodes that do not belong to the topological level defined by the provided labels.  See [[/etc/origin/master/scheduler.json#serviceAffinity|scheduler.json]]
 
====LabelsPresence====
 
LabelsPresence checks whether the node has certain labels defined, regardless of value.
 
==Priority Functions==
 
===Existing Priority Functions===
 
* <span id="LeastRequestedPriority"></span>LeastRequestedPriority - favors nodes with fewer requested resources, calculates percentage of memory and CPU requested by pods scheduled on node, and prioritizes nodes with highest available capacity.
* BalancedResourceAllocation - favors nodes with balanced resource usage rate, calculates difference between consumed CPU and memory as fraction of capacity and prioritizes nodes with the smallest difference. It should always be used with [[#LeastRequestedPriority|LeastRequestedPriority]].
* ServiceSpreadingPriority - spreads pods by minimizing the number of pods belonging to the same service that are placed onto the same node.
* EqualPriority
 
===Configurable Priority Functions===
 
* ServiceAntiAffinity
* LabelsPreference


==Scheduler Policy==

The selection of the predicates and the priority functions defines the ''scheduler policy''. The default policy is configured in the master configuration file [[master-config.yml]] as [[Master-config.yml#schedulerConfigFile|kubernetesMasterConfig.schedulerConfigFile]]. By default, it points to [[/etc/origin/master/scheduler.json]]. The current scheduling policy in force can be obtained with <font color=red>?</font>. A custom policy can replace it, if necessary, by following this procedure: [[OpenShift Modify the Scheduler Policy|Modify the Scheduler Policy]].

==Scheduler Policy File==

The default scheduler policy is configured in the master configuration file [[master-config.yml]] as [[Master-config.yml#schedulerConfigFile|kubernetesMasterConfig.schedulerConfigFile]], which by default points to [[/etc/origin/master/scheduler.json]]. Another example of a scheduler policy file: {{Internal|Scheduler Policy File|Scheduler Policy File}}
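For orientation, a minimal policy file combining some of the predicates and priority functions described above could look as follows; this is a sketch, not the complete default policy:

 {
   "kind": "Policy",
   "apiVersion": "v1",
   "predicates": [
     {"name": "PodFitsResources"},
     {"name": "NoDiskConflict"},
     {"name": "MatchNodeSelector"}
   ],
   "priorities": [
     {"name": "LeastRequestedPriority", "weight": 1},
     {"name": "BalancedResourceAllocation", "weight": 1}
   ]
 }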


=<span id='Storage'></span><span id='Volume'></span><span id='Persistent_Volume'></span><span id='Persistent_Volume_Claim'></span><span id='EmptyDir'></span><span id='Temporary_Pod_Volume'></span>Volumes=

Persistent Volumes, Persistent Volume Claims, EmptyDirs, ConfigMaps, Downward API, Host Paths, Secrets: {{Internal|OpenShift Volume Concepts|Volumes}}


=etcd=

Implements OpenShift's [[#Store_Layer|store layer]], which holds the current state, desired state and configuration information of the environment.
 
OpenShift stores image, build, and deployment ''metadata'' in etcd. Old resources must be periodically pruned. If a large number of images/build/deployments are planned, etcd must be placed on machines with large amounts of memory and fast SSD drives.
 
{{Internal|Etcd Concepts#Overview|etcd}}
 
==etcd and Master Caching==
 
[[#Master|Masters]] cache deserialized resource metadata to reduce the CPU load and keep the metadata in memory. For small clusters, the cache can use a lot of memory for negligible CPU reduction. The default cache size is 50,000 entries, which, depending on the size of resources, can grow to occupy 1 to 2 GB of memory. The cache size can be configured in [[Master-config.yml#deserialization-cache-size|master-config.yml]].
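A sketch of the corresponding [[Master-config.yml#deserialization-cache-size|master-config.yml]] fragment, using the default value mentioned above:

 kubernetesMasterConfig:
   apiServerArguments:
     deserialization-cache-size:
     - "50000"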
 
=Image Registries=
 
{{External|https://docs.openshift.com/container-platform/latest/install_config/registry/index.html}}
 
<span id='Docker_Registry'></span>
==Integrated Docker Registry==
 
OpenShift contains an integrated [[Docker_Concepts#Image_Registry|Docker image registry]], providing authentication and access control to images. The integrated registry should not be confused with the OpenShift Container Registry (OCR), which is a standalone solution that can be used outside of an OpenShift environment. The integrated registry is an application deployed within the "[[#Default_Project|default]]" project as a [[Docker_Concepts#Privileged_Container|privileged container]], as part of the cluster installation process. The integrated registry is available internally in the OpenShift cluster at the following [[#The_Service_IP_Address|Service IP]]: docker-registry.default.svc:5000/. It consists of a "docker-registry" [[#Service|service]], a "docker-registry" [[#DeploymentConfig|deployment configuration]], a "registry" [[OpenShift_Security_Concepts#Service_Account|service account]] and a role binding that associates the service account with the "system:registry" cluster role. [[OpenShift Registry Definition|More details]] on the structure of the integrated registry can be generated with [[oadm registry|oadm registry -o yaml]]. The default registry also comes with a [[#The_Registry_Console|registry console]] that can be used to browse images.
 
The main function of the integrated registry is to provide a default location where users can push images they build. Users push images into the registry and, whenever a new image is stored in the registry, the registry notifies OpenShift about it and passes along image information such as the [[#Namespace|namespace]], the name and the image metadata. Various OpenShift components react by creating new builds and deployments.
 
Alternatively, the integrated registry may also store external images and expose them to the projects that need them. The Docker servers on OpenShift nodes are configured to be able to access the internal registry, but they are also configured with [[#registry.access.redhat.com|registry.access.redhat.com]] and [[#docker.io|docker.io]].
 
Multiple running instances of the container registry require backend storage supporting writes by multiple processes. If the chosen infrastructure provider does not provide this ability, running a single instance of a container registry is acceptable.
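For illustration, an image can be pushed into the integrated registry with the standard Docker client; the project and image names are hypothetical, and the user is assumed to have push permissions in the project:

 docker login -u <''user''> -p $(oc whoami -t) docker-registry.default.svc:5000
 docker tag myimage:latest docker-registry.default.svc:5000/myproject/myimage:latest
 docker push docker-registry.default.svc:5000/myproject/myimage:latest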
 
<font color=red>
'''Questions''':
* How can I "instruct" an application to use a specific registry (for example, a "lab" application to use registry-lab)?
* How do pods that need images go to that registry and not another? The answer to this question lies in the way an application uses the registry.
</font>
 
==External Registries==
 
The external registries [[#registry.access.redhat.com|registry.access.redhat.com]] and [[#docker.io|docker.io]] are accessible. In general, any server implementing the Docker registry API can be integrated as a [[#Stand-alone_Registry|stand-alone registry]].
 
===registry.access.redhat.com===
 
{{Internal|registry.access.redhat.com|registry.access.redhat.com}}
 
The Red Hat registry is available at http://registry.access.redhat.com. An individual image can be inspected with a URL similar to https://access.redhat.com/containers/#/registry.access.redhat.com/jboss-eap-7/eap70-openshift/images/1.5-23
 
===docker.io===
 
[[Docker Concepts#Docker_Hub|Docker Hub]] is available at http://dockerhub.com.
 
==Stand-alone Registry==
 
{{External|https://docs.openshift.com/container-platform/latest/install_config/install/stand_alone_registry.html#install-config-installing-stand-alone-registry}}


Any server implementing the Docker registry API can be integrated as a stand-alone registry: a stand-alone container registry can be installed and used with the OpenShift cluster.

==<span id='The_Registry_Console'></span>Integrated Registry Console==

{{External|https://docs.openshift.com/container-platform/latest/install_config/registry/deploy_registry_existing_clusters.html#registry-console}}

The registry comes with a "registry console" that allows web-based access to the registry. A URL example is https://registry-console-default.apps.openshift.novaordis.io. The registry console is deployed as a [[OpenShift_Concepts#Default_Project|registry-console]] pod, part of the "[[OpenShift_Concepts#Default_Project|default]]" project.


==Registry Operations==

{{Internal|OpenShift Registry Operations|Registry Operations}}

=<span id='Service_Endpoint'></span><span id='Service_Proxy'></span><span id='Service_Dependencies'></span><span id='Relationship_between_Services_and_the_Pods_providing_the_Service'></span><span id='EndpointsController'></span>Service=

{{Internal|OpenShift Service Concepts|OpenShift Service Concepts}}


=API=

{{Internal|Kubernetes Concepts#API|Kubernetes API}}


=Networking=
 
In OpenShift, nodes communicate with each other using the infrastructure-provided network. Pods communicate with each other via a software-defined network (SDN), also known as the cluster network, which runs on top of it.
 
==Required Network Ports==
 
{{External|https://docs.openshift.com/enterprise/3.1/install_config/install/prerequisites.html#prereq-network-access}}
 
==Default Network Interface on a Host==
 
Every time OpenShift refers to the "default interface" on a host, it means the network interface associated with the default route. The logic that figures it out is described here: {{Internal|Default Network Interface on a Host|Default Network Interface on a Host}}
 
==SDN, Overlay Network==
 
{{External|https://docs.openshift.com/container-platform/latest/architecture/additional_concepts/sdn.html#architecture-additional-concepts-sdn}}
{{External|https://docs.openshift.com/container-platform/latest/install_config/configuring_sdn.html}}
 
<span id="Overlay_Network"></span><span id="SDN"></span>
All hosts in the OpenShift environment are [[#OpenShift_Cluster|clustered]] and are also members of the ''overlay network'' based on a Software Defined Network (SDN). Each [[#Pod|pod]] gets its [[#Pod_IP_Address|own IP address]], by default from the 10.128.0.0/14 subnet, that is routable from any member of the SDN network. Giving each pod its own IP address means that pods can be treated like physical hosts or virtual machines in terms of port allocation, networking, naming, service discovery, load balancing, application configuration and migration.
 
However, it is not recommended that a pod talks to another pod directly by using its IP address. Instead, pods should use [[#Service|services]] as an indirection layer and interact with the service, which may be backed by different pods at different times. Each [[#Service|service]] also gets its own [[#Service_IP_Address|IP address]] from the 172.30.0.0/16 subnet.
 
===The Cluster Network===
 
The ''cluster network'' is the network from which pod IPs are assigned. This network block should be private and must not conflict with existing network blocks in your infrastructure that pods, nodes or the master may require access to. The default subnet value is 10.128.0.0/14 (10.128.0.0 - 10.131.255.255) and it cannot be arbitrarily reconfigured after deployment. The size and address range of the cluster network, as well as the host subnet size, are configurable during installation. Configured with 'osm_cluster_network_cidr' at [[OpenShift_3.5_Installation#Configure_Ansible_Inventory_File|installation]].
 
Samples of pod addresses can be obtained by querying [[OpenShift_Service_Concepts#Service_Endpoints|service endpoints]] in a project:
 
oc get endpoints
NAME          ENDPOINTS          AGE
some-service  10.130.1.17:8080  21m
 
The master maintains a registry of nodes in etcd, and each [[#Node|node]] gets allocated, upon creation, an unused subnet from the cluster network. Each node gets a /23 subnet, which means the cluster can allocate 512 subnets to nodes, and each node has 510 addresses to assign to containers running on it. Once the node is registered with the master and gets its cluster network subnet, the SDN creates three devices on the node:
* <span id='br0'></span>'''br0''' - the OVS bridge device that pod containers will be attached to. The bridge is configured with a set of non-subnet specific flow rules.
* '''tun0''' - an OVS internal port (port 2 on br0). This gets assigned the cluster subnet gateway address, and it is used for external access. The SDN configures net filter and routing rules to enable access from the cluster subnet to the external network via NAT.
* <span id='vxlan'></span>'''vxlan_sys_4789'''. This is the OVS VXLAN device (port 1 on br0) which provides access to containers on remote nodes. It is referred to as "vxlan0".
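These devices can be inspected directly on a node, assuming the Open vSwitch and iproute2 tools are available locally:

 ovs-vsctl show
 ip addr show tun0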
 
Each time a pod is started, the SDN assigns the pod a free IP address from the node's cluster subnet, attaches the host side of the pod's veth interface pair to the OVS bridge br0, and adds OpenFlow rules to the OVS database to route traffic addressed to the new pod to the correct OVS port. If the [[OpenShift_Network_Plugins#multitenant|ovs-multitenant]] plug-in is active, it also adds OpenFlow rules to tag traffic coming from the pod with the pod's VNID, and to allow traffic into the pod if the traffic's VNID matches the pod's VNID, or it is the privileged VNID 0.
 
Nodes update their OpenFlow rules in response to the master's notifications when nodes are added or removed. When a new subnet is added, the node adds rules on br0 so that packets with a destination IP address in the remote subnet go to vxlan0 (port 1 on br0) and thus out onto the network. The [[OpenShift_Network_Plugins#subnet|ovs-subnet]] plug-in sends all packets across the VXLAN with VNID 0, but the [[OpenShift_Network_Plugins#multitenant|ovs-multitenant]] plug-in uses the appropriate VNID for the source container.
 
The SDN does not allow the master host (which is running the [[Open_vSwitch|OVS processes]]) to access the cluster network, so a master does not have access to pods via the cluster network, unless it is also running a node.
 
When [[OpenShift_Network_Plugins#multitenant|ovs-multitenant]] plug-in is active, the master also allocates VXLAN [[OpenShift_Network_Plugins#Virtual_Network_ID_.28VNID.29|VNIDs]] to projects. VNIDs are used to isolate the traffic.
 
====Packet Flow====
 
<font color=red>TODO: https://docs.openshift.com/container-platform/3.5/architecture/additional_concepts/sdn.html#sdn-packet-flow</font>
 
====Pod IP Address====
 
A [[#The_Cluster_Network|cluster network]] IP address that gets assigned to a [[#Pod|pod]].
 
===The Services Subnet===
 
OpenShift uses a "services subnet", also known as "kubernetes services network", in which [[OpenShift Service Concepts#Overview|OpenShift Services]] will be created within the [[#SDN|SDN]]. This network block should be private and must not conflict with any existing network blocks in the infrastructure to which pods, nodes, or the master may require access to. It defaults to 172.30.0.0/16. It cannot be re-configured after deployment. If changed from the default, it must not be 172.16.0.0/16, which the docker0 network bridge uses by default, unless the docker0 network is also modified. It is configured with 'openshift_master_portal_net', 'openshift_portal_net' at [[OpenShift_3.5_Installation#Configure_Ansible_Inventory_File|installation]] and populates the master configuration file [[Master-config.yml#servicesSubnet| servicesSubnet]] element. Note that Docker expects its [[Docker_Server_Configuration#--insecure-registry|insecure registry]] to available on this subnet.
 
The services subnet IP address of a specific service can be displayed with:
 
[[Oc_describe#service|oc describe service]] <''service-name''>
 
or
[[#Service_IP_Address|oc get svc]] <''service-name''>
 
====The Service IP Address====
 
An IP address from the [[#The_Services_Subnet|services subnet]] that is associated with a service. <span id='Service_IP_Address'></span>The service IP address is reported as "Cluster IP" by:
 
[[Oc_get#services.2C_svc|oc get svc]] <''service-name''>
 
This makes sense if we think about a service as a cluster of pods that provide the service.
 
Example:
 
oc get svc logging-es
NAME        CLUSTER-IP      EXTERNAL-IP  PORT(S)    AGE
logging-es  172.30.254.155  <none>        9200/TCP  31d
 
===Docker Bridge Subnet===
 
Default 172.17.0.0/16.
 
===Network Plugin===
 
{{Internal|OpenShift Network Plugins|OpenShift Network Plugins}}
 
==Open vSwitch==
 
{{Internal|Open vSwitch|Open vSwitch}}
 
==DNS==
 
[[Image:OpenShiftDNSConcepts.png]]
 
===External DNS Server===
 
An external DNS server is required to resolve the public wildcard name of the environment - and as a consequence, all public names of various application access points - to the public address of the [[#Default_Router|default router]], as per the [[#Networking_Workflow|networking workflow]]. If more than one router is deployed, the external DNS server should resolve the public wildcard name of the environment to the public IP address of the load balancer. For more details about the initial configuration, see [[OpenShift_3.5_Installation#External_DNS_Setup|External DNS Setup]].
 
===Optional Support DNS Server===
 
An optional support DNS server can be set up to translate local hostnames (such as master1.ocp36.local) to the internal IP addresses allocated to the hosts running the OpenShift infrastructure. There is no good reason to make these addresses and names publicly accessible. The OpenShift advanced installation procedure factors it in automatically, as long as the base host image OpenShift is installed on top of is already configured to use it to resolve DNS names via [[/etc/resolv.conf]] or [[NetworkManager]]. No additional configuration via [[OpenShift_hosts#openshift_dns_ip|openshift_dns_ip]] is necessary.
 
===Internal DNS Server===
 
A DNS server used to resolve local resources. The most common use is to translate [[#Service|service]] names to service addresses. The internal DNS server listens on the [[#The_Service_IP_Address|service IP address]] 172.30.0.1:53. It is a SkyDNS instance built into OpenShift. It is deployed on the master and answers queries for services. The naming queries issued from inside containers/pods are directed to the internal DNS server by Dnsmasq instances running on each node. The process is described [[#Dnsmasq|below]].
 
The internal DNS server will answer queries on the ".cluster.local" domain that have the following form:
* <''namespace''>.cluster.local
* <''service''>.<''namespace''>.svc.cluster.local - service queries
* <''name''>.<''namespace''>.endpoints.cluster.local - endpoint queries
 
Service DNS can still be used and responds with multiple A records, one for each pod of the service, allowing the client to round-robin between each pod.
 
===Dnsmasq===
 
Dnsmasq is deployed on all nodes, including masters, as part of the installation process, and it works as a DNS proxy. It binds on the [[OpenShift_Concepts#Default_Network_Interface_on_a_Host|default interface]], which is not necessarily the interface servicing the physical OpenShift subnet, and resolves all DNS requests. All OpenShift "internal" names - service names, for example - and all unqualified names are assumed to be in the "''namespace''.svc.cluster.local", "svc.cluster.local", "cluster.local" and "local" domains and forwarded to the [[#Internal_DNS_Server|internal DNS server]]. This is done by Dnsmasq, as configured from /etc/dnsmasq.d/origin-dns.conf:
 
server=/cluster.local/172.30.0.1
 
This configuration tells Dnsmasq to forward all queries for names in the "cluster.local" domain and sub-domains to the [[#Internal_DNS_Server|internal DNS server]] listening on 172.30.0.1.
 
This is an example of such a query (where 192.168.122.17 is the IP address of the local node on which Dnsmasq binds, and 172.30.254.155 is a service IP address):
 
nslookup logging-es.logging.svc.cluster.local
Server: 192.168.122.17
Address: 192.168.122.17#53
Name: logging-es.logging.svc.cluster.local
Address: 172.30.254.155
 
The non-OpenShift name requests are forwarded to the real DNS server stored in /etc/dnsmasq.d/origin-upstream-dns.conf:
 
server=192.168.122.12
 
The configuration file /etc/dnsmasq.d/origin-dns.conf is deployed at installation, while /etc/dnsmasq.d/origin-upstream-dns.conf is created dynamically every time the external DNS changes, as is the case, for example, when the interface receives a new DHCP IP address.
 
For more details, see {{Internal|Dnsmasq|Dnsmasq}}


===Container /etc/resolv.conf===


The container /etc/resolv.conf is created while the container is being assembled, based on information available in [[Node-config.yml|node-config.yaml]]. Also see:
* [[Node-config.yml#dnsIP|node-config.yml dnsIP]]


===External OpenShift DNS Resources===


* https://docs.openshift.com/container-platform/3.5/architecture/additional_concepts/networking.html#architecture-additional-concepts-openshift-dns


==Routing Layer==


OpenShift networking provides a platform-wide ''routing layer'' that directs outside traffic to the correct pod IPs and ports. The routing layer cooperates with the [[#Service_Layer|service layer]]. Routing layer components themselves run in pods. They provide automated load balancing to pods, and routing around unhealthy pods. The routing layer is pluggable and extensible. For more details about the overall OpenShift workflow, see "[[#OpenShift_Workflow|OpenShift Workflow]]".


==Router==


{{External3|https://docs.openshift.com/container-platform/3.5/install_config/router/index.html|https://docs.openshift.com/container-platform/3.5/architecture/core_concepts/routes.html|https://docs.openshift.com/container-platform/latest/install_config/router/index.html}}
 
The router service routes external requests to applications inside the OpenShift environment. The router service is deployed as one or more pods. The pods associated with the router service are deployed as infrastructure containers on [[#Infrastructure_Node|infrastructure nodes]]. A router is usually created during the installation process. Additional routers can be created with [[oadm router]].
 
The router is the ingress point for all traffic flowing to backing pods implementing OpenShift [[#Service|Services]]. Routers run in containers, which may be deployed on any node (or nodes) in an OpenShift environment, as part of the [[#Default_Project|"default" project]]. A router works by resolving the fully qualified external DNS names external requests are associated with (for example "kibana.apps.openshift.novaordis.io") into pod IP addresses, and by proxying the requests directly into the pods that back the corresponding [[#Service|service]]. The router gets the pod IPs from the [[#Service|service]], and proxies the requests to the pods directly, not through the [[#Service|Service]].
 
Routers attach directly to ports 80 and 443 on all interfaces on a host, so deployment of the corresponding containers should be restricted to the infrastructure hosts where those ports are available. Statistics are usually available on port 1936.
 
===Relationship between Service and Router===
 
When the default router is used, the [[#Service_Layer|service layer]] is bypassed. The [[#Service|Service]] is used only to find which pods the service represents. The default router does the load balancing by proxying directly to the pod endpoints.
 
[[Image:OpenShiftRouterConcepts.png]]
 
===Sticky Sessions===
 
The router configuration determines the sticky session implementation. The default HAProxy template implements sticky sessions using the ''balance source'' directive.
 
===Router Implementations===
 
====HAProxy Default Router====
 
{{External|https://docs.openshift.com/container-platform/3.5/architecture/core_concepts/routes.html#haproxy-template-router}}
 
By default, the router process is an instance of [[HAProxy]], referred to as the <span id="Default_Router"></span>"Default Router". It uses the "openshift3/ose-haproxy-router" image, and it supports [[#unsecured_route|unsecured]], [[#edge_secured_route|edge-terminated]], [[#re-encryption_terminated_secured_route|re-encryption terminated]] and [[#passthrough_secured_route|passthrough terminated routes]], matching on HTTP host and request path.
 
====F5 Router====
 
The F5 Router integrates with existing F5 Big-IP systems.
 
===Router Operations===
 
Information about the existing routers can be obtained with:
 
  [[Oadm_router#Information|oadm router]] -o yaml
 
More details on router operations are available here: {{Internal|OpenShift Router Operations|Router Operations}}


==Route==


Logically, a ''route'' is an external DNS entry, either in a top level domain or a dynamically allocated name, that is created to point to an OpenShift [[#Service|service]] so that the service can be accessed outside the cluster. The administrator may configure one or more [[#Router|routers]] to handle those routes, typically through an Apache or HAProxy load balancer/proxy.
 
The route object describes a service to expose by giving it an externally reachable DNS host name.  


A route is a mapping of an FQDN and path to [[#Service_Endpoint|the endpoints of a service]]. Each route consists of a route name, a service selector and an optional security configuration:


NAME            HOST/PORT                            PATH      SERVICES        PORT      TERMINATION          WILDCARD
logging-kibana  kibana.apps.openshift.novaordis.io            logging-kibana  <all>    reencrypt/Redirect  None


A route can be <span id="unsecured_route"></span>''unsecured'' or  ''secured''.


Secured routes specify the TLS termination of the route, which relies on SNI (Server Name Indication). More specifically, they can be:
* <span id="edge_secured_route"></span>edge secured (or edge terminated). Occurs at the router, prior to proxying the traffic to the destination. The  front end of the router serves the TLS certificates, so they must be configured in the route. If the certificates are not configured in the route, the default certificate of the router is used. Connection from the router into the internal network are not encrypted. This is an [[OpenShift_Route_Definition#Secured_Edge-Terminated_Route|example of edge-termintated route]].
* <span id="passthrough_secured_route"></span>passthrough secured (or passthrough terminated). The encrypted traffic is send to to destination, the router does not provide TLS termination, hence no keys or certificates are required on the router. The destination will handle certificates. This method supports client certificates. This is an [[OpenShift_Route_Definition#Passthrough-Terminated_Route|example of passthrough-termintated route]].
* <span id="re-encryption_terminated_secured_route"></span>re-encryption terminated secured. This is an [[OpenShift_Route_Definition#Re-encryption-Terminated_Route|example of re-encyrption-termintated route]].


{{Internal|OpenShift Route Definition|Route Definition}}


Routes can be displayed with the following commands:
[[oc get#all|oc get all]]
[[oc get#routes|oc get routes]]


Routes can be created with API calls, a JSON or [[OpenShift Route Definition|YAML object definition file]], or the [[oc expose#service|oc expose service]] command.
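For illustration, a route with a hypothetical host name can be created for an existing service as follows:

 oc expose service my-service --hostname=myapp.apps.example.com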


===Path-Based Route===


A ''path-based route'' specifies a path component, allowing the same host to serve multiple routes.
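A minimal sketch of a path-based route definition; the names, host and path are hypothetical:

 apiVersion: v1
 kind: Route
 metadata:
   name: my-route
 spec:
   host: www.example.com
   path: "/test"
   to:
     kind: Service
     name: my-service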


===Route with Hostname===


Routes let you associate a service with an externally reachable hostname. If the hostname is not provided, OpenShift will generate one based on the <''route-name''>-<''namespace''>.<''suffix''> pattern.


===Default Routing Subdomain===


The suffix and the default routing subdomain can be configured in the [[Master-config.yml#subdomain|master configuration file]].
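A sketch of the relevant [[Master-config.yml#subdomain|master-config.yml]] fragment, using the environment's wildcard domain as an example value:

 routingConfig:
   subdomain: "apps.openshift.novaordis.io"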


===Route Operations===


{{Internal|OpenShift Route Operations|Route Operations}}


==Networking Workflow==


The networking workflow is implemented at the [[#Networking_Layer|networking layer]].


* A user requests a page by pointing the browser to http://myapp.mydomain.com
* The [[#External_DNS_Server|external DNS server]] resolves that request to the IP address of one of the hosts that host the [[#Default_Router|default router]]. That usually requires a wildcard CNAME record in the DNS server, pointed to the node that hosts the router container.
* The [[#Default_Router|default router]] matches the request's host name and path against the configured [[#Route|routes]] and identifies the backing [[#Service|service]].
* The router selects a pod from the list of pods listed by the [[#Service|service]] and acts as a proxy for that pod (this is where the port can be translated from the external 443 to the internal 8443, for example).


=Resources=


* cpu, requests.cpu
* resourcequotas
* services
* [[OpenShift_Security_Concepts#Secret|Secrets]]
* [[OpenShift_Volume_Concepts#ConfigMap|configmaps]]
* persistentvolumeclaims
* openshift.io/imagestreams


Most resources can be defined in JSON or YAML files, or via an API call. Resources can be exposed via the [[OpenShift_Volume_Concepts#Downward_API|downward API]] to the container.
 
=<span id='Resource_Quotas'></span>Resource Management=
 
{{Internal|OpenShift Resource Management Concepts|Resource Management}}
 
=Project=
 
<span id='Projects'></span>
Projects allow groups of [[OpenShift_Security_Concepts#User|users]] to work together, define ownership of resources and manage resources - they can be seen as the central vehicle for managing regular users' access to resources. The project restricts and tracks use of resources with quotas and limits. A project is a [[Kubernetes Concepts#Namespace|Kubernetes namespace]] with additional annotations. A project can contain any number of containers of any kind, and any grouping or structure can be enforced using labels. The project is the closest concept to an application: OpenShift does not know of applications, though it provides a [[oc new-app|new-app command]]. A project lets a community of users organize and manage their content in isolation from other communities. Users must be given access to projects by administrators, unless they are given permission to create projects, in which case they automatically have access to their own projects.
 
Most objects in the system are scoped by a namespace, with the exception of:
* [[OpenShift_Security_Concepts#User|users]]
* [[OpenShift_Security_Concepts#Group|groups]]
* [[#Node|nodes]]
* [[#Persistent_Volume|persistent volumes]]
* [[#Images|images]]
 
More details about OpenShift types are available here: [[oc types|OpenShift types]].
 
Each project has its own set of <span id='Objects'>''objects''</span>: [[#Pod|pods]], [[#Service|services]], [[#Replication_Controller|replication controllers]], etc. The names of each resource are unique within a project. Developers may request projects to be created, but administrators control the resources allocated to the projects.
 
The actions a user can or cannot perform on a project's objects are specified by <span id='Policies'>[[OpenShift_Security_Concepts#Policy|policies]]</span>.
 
<span id='Constraints'>''Constraints''</span> are quotas for each kind of [[#Object|object]] that can be limited.
 
New projects are created with:
 
[[Oc new-project#Overview|oc new-project]]
 
==Current Project==
 
The ''current project'' is a concept that applies to [[Oc#Namespace_Selection|oc]], and specifies the project oc commands apply to, without having to explicitly use the -n <project-name> qualifier. The current project can be set with [[Oc_project|oc project]] and read with [[oc status]]. The current project is part of the CLI tool's [[OpenShift_Command_Line_Tools#Context|current context]], maintained in user's [[.kube config|.kube/config]].
 
==Global Project==
 
In the context of an [[OpenShift_Network_Plugins#multitenant|ovs-multitenant]] SDN plugin, a project is ''global'' if it can receive [[#The_Cluster_Network|cluster network]] traffic from any pods, belonging to any project, and it can send traffic to any pods and services. <font color=red>Is the [[#Default_Project|default]] project a global project?</font> A project can be made global with:
 
[[OpenShift_Network_Operations#Pod_Network_Management|oadm pod-network]] make-projects-global <''project-1-name''> <''project-2-name''>
 
==Standard Projects==
 
===Default Project===
 
The "default" project is also referred to as "default namespace". It contains the following pods:
* the [[OpenShift_Concepts#Integrated_Docker_Registry|integrated docker registry]] pod (memory consumption based on a test installation: 280 MB)
* the [[OpenShift_Concepts#The_Registry_Console|registry console]] pod (memory consumption based on a test installation: 34 MB)
* the [[#Router|router]] pod (memory consumption based on a test installation: 140 MB)
 
In case the [[OpenShift_Network_Plugins#multitenant|ovs-multitenant]] SDN plug-in is installed, the "default" project has [[OpenShift_Network_Plugins#Virtual_Network_ID_.28VNID.29|VNID]] 0 and all its pods can send and receive traffic from any other pods.
 
The "default" project can be used to store a new project template, if the default one needs to be modified. See [[OpenShift_Template_Operations#Modify_the_Template_for_New_Projects|Template Operations - Modify the Template for New Projects]].
 
==="logging" Project===
 
If logging support is deployed at installation or later, the participating pods ([[Kibana and OpenShift|kibana]], [[ElasticSearch and OpenShift|ElasticSearch]], [[Fluentd and OpenShift|fluentd]], [[OpenShift Curator|curator]]) are members of the [[OpenShift_Logging_Concepts#The_.22logging.22_Project|"logging" project]].
 
This is the memory consumption based on a test installation:
* kibana pod: 95 MB
* elasticsearch pod: 1.4GB
* curator pod: 10 MB
* fluentd pods: max 130 MB
 
==="openshift-infra" Project===


<tt>ResourceQuota</tt> object enumerates hard resource usage limits ''per project''.
Contains the [[OpenShift Metrics Concepts#Overview|metrics components]]:


<tt>ClusterResourceQuota</tt>object enumerates hard resource usage limits across the cluster.
* the [[OpenShift_Hawkular#Cassandra|Hawkular Cassandra]] pod
* Hawkular pod
* Heapster pod


Operations:
==="openshift" Project===
* [[Oc#quota|oc get quota]]
 
* [[Oc#quota_2|oc describe quota]]
Contains standard [[#Template|templates]] and [[#.22openshift.22_Image_Streams|image streams]]. Images to use with OpenShift projects can be installed during the [[OpenShift_3.6_Installation#Load_Default_Image_Streams_and_Templates|OpenShift installation phase]], or they can be added later running command similar to:
oc -n openshift [[OpenShift_Image_and_ImageStream_Operations#Create_an_Individual_Image_Stream_with_import-image|import-image]] jboss-eap64-openshift:1.6
 
===Other Standard Projects===
 
* "kube-system"
* "management-infra"
 
==Projects and Applications==
 
Each project may have multiple applications, and each application can be managed individually.
 
A new application is created with:
 
[[oc new-app]]
 
===The "app" Label===
 
<font color=red>
'''TODO''': app vs. application
 
There is no OpenShift object that represents an application. However, the pods belonging to a specific application are grouped together in the Web console project window, under the same "Application" logical category. The grouping is done based on the "app" label value: all pods with the same "app" label value are represented as belonging to the same "application". The "app" label value can be explicitly set in the template used to instantiate the application, [[OpenShift_Template_Definition#Labels|in the "labels:" section]]. Some templates [[OpenShift_Build_and_Deploy_a_JEE_Application_with_S2I#Create_the_Application|expose this as a parameter]], so it can be set on the command line with a syntax similar to:
--param APPLICATION_NAME=<''application-name''>
</font>
 
===Application Operations===
 
{{Internal|OpenShift_Application_Operations#Overview|Application Operations}}
 
==Project Operations==
 
{{Internal|OpenShift Project Operations|Project Operations}}


=Template=

{{External|https://docs.openshift.com/container-platform/latest/dev_guide/templates.html}}
 
{{External|https://docs.openshift.org/latest/dev_guide/templates.html}}
 
A template is a resource that describes a set of objects that can be parameterized and processed, so they can be created at once. The template can be processed to create anything within a project, provided that permissions exist. The template may define a set of labels to apply to every object defined in the template. A template can use preset variables or randomized values, like passwords.
 
Templates can be stored in, and processed from files, or they can be exposed via the OpenShift API and stored as part of the project. Users can define their own templates within their projects.
 
The objects defined in a template collectively define a ''desired state''. OpenShift's responsibility is to make sure that the current state matches the desired state.
 
{{Internal|OpenShift Template Definition|Template Definition}}
 
Specifying parameters in a template allows all objects instantiated by the template to see consistent values for these parameters when the template is processed. The parameters can be specified explicitly, or generated automatically.
 
The configuration can be generated from a template with the [[oc process]] command.
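A minimal usage sketch (the template file name and the parameter value are hypothetical):

 oc process -f my-template.yaml --param APPLICATION_NAME=myapp | oc create -f -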
 
Most templates use pre-built S2I builder images, which include the programming language runtime and its dependencies. These builder images can also be used by themselves, without the corresponding template, for simple use cases.
 
==New Project Template==
 
The master provisions projects based on the template that is identified by the [[Master-config.yml#projectRequestTemplate|"projectRequestTemplate" in master-config.yaml file]]. If nothing is specified there, new projects will be created based on a [[OpenShift Default New Project Template|built-in new project template]] that can be obtained with:
 
oadm create-bootstrap-project-template -o yaml
 
==New Application Template==
 
[[oc new-app]] ''<template-name>''


==Template Libraries==

* Global template library - <font color=red>is this the "openshift" project?</font>
* Project template library. A project template can be based on a JSON file and uploaded with [[Oc_create#Uploads_a_Template_to_the_Project_Template_Library|oc create]] to the project template library. A template stored in the library can be instantiated with the [[oc new-app]] command.
* https://github.com/openshift/library
* https://github.com/jboss-openshift/application-templates

==Template Operations==

{{Internal|OpenShift Template Operations|Template Operations}}


=Build=


{{External|https://docs.openshift.com/container-platform/latest/architecture/core_concepts/builds_and_image_streams.html}}

A ''build'' is the process of transforming input parameters into a resulting object. In most cases, that means transforming source code, other images, Dockerfiles, etc. into a ''runnable image''. A build is run inside of a [[Docker_Concepts#Privileged_Container|privileged container]] and has the same restrictions normal pods have. A build usually results in an image pushed to a Docker registry, subject to post-build tests. Builds are run with the builder [[OpenShift_Security_Concepts#Service_Account|service account]], which must have access to any secrets it needs, such as repository credentials; in addition to the builder service account having access to the build secrets, the [[#Build_Configuration|build configuration]] must [[OpenShift_Concepts#Build_Configuration_and_Build_Secrets|contain the required build secret]]. Builds can be [[#Build_Trigger|triggered]] manually or automatically.

OpenShift supports several types of builds. The type of build is also referred to as the [[#Build_Strategy|build strategy]]:
* [[#Docker_Build|Docker build]]
* [[#Source-to-Image_.28S2I.29_Build|Source-to-Image (S2I or source) build]]
* [[#Pipeline_Build|Pipeline build]]
* [[#Custom_Build|Custom build]]

Builds for a project can be reviewed by navigating with the web console to the project -> Builds -> Builds, or by invoking [[Oc_get#build.2C_builds|oc get builds]] from the command line. Note these are actual executed builds, not build configurations.


==Build Strategy==

===Docker Build===


Docker can be used to build [[Docker Concepts#Container_Image|images]] directly. The Docker build expects a repository with a [[Dockerfile]] and all the artifacts required to produce a runnable image. It invokes the [[docker build]] command, and the result is a runnable image.
 
{{Internal|OpenShift_Build_Operations#Docker_Build|Create a Docker Build}}


===Source-to-Image (S2I) Build===


{{External|https://docs.openshift.com/container-platform/latest/architecture/core_concepts/builds_and_image_streams.html#source-build}}

{{External|https://docs.openshift.com/container-platform/latest/creating_images/s2i.html#creating-images-s2i}}

A ''source-to-image build strategy'' is a process that takes a base image, called the [[#Builder_Image|builder image]], application source code and [[#S2I_Scripts|S2I scripts]], compiles the source and injects the build artifacts into the builder image, producing a ready-to-run new Docker image, which is the end product of the build. The new image incorporates the base image and the built source, and is ready to use with the [[docker run]] command. The image is then pushed into the [[#Integrated_Docker_Registry|integrated Docker registry]].
 
The essential characteristic of the source build is that the [[#Builder_Image|builder image]] provides both the build environment and the runtime image the build artifact is supposed to run in. The build logic is encapsulated within the [[#S2I_Scripts|S2I scripts]]. The S2I scripts usually come with the builder image, but they can also be overridden by scripts placed in the source code or in a different location specified in the build configuration. OpenShift supports a wide variety of languages and base images: Java, PHP, Ruby, Python, Node.js. [[#Incremental_Build|Incremental builds]] are supported. With a source build, the assemble process performs a large number of complex operations without creating a new layer at each step, resulting in a compact final image.
 
The source strategy is specified in the [[#Build_Configuration|build configuration]] as follows:
 
kind: BuildConfig
spec:
  '''strategy''':
      '''type''': <font color=teal>'''Source'''</font>
      <font color=teal>'''sourceStrategy'''</font>:
        '''from''':
          kind: ImageStreamTag
          name: jboss-eap70-openshift:1.5
          namespace: openshift
  '''source''':
    '''type''': Git
    '''git''':
      uri: ...
  '''output''':
    '''to''':
      kind: '''ImageStreamTag'''
      name: <''app-image-repository-name''>:latest
The [[#Builder_Image|builder image]] is specified with the 'spec.strategy.sourceStrategy.from' element. The source code origin is specified with the 'spec.source' element. Assuming that the build is successful, the resulting image is pushed into the <font color=red>?</font> image stream and tagged with the 'output' image stream tag.
 
A working S2I Build Example is available here: {{Internal|OpenShift Build and Deploy a JEE Application with S2I|OpenShift Build and Deploy a JEE Application with S2I}}
 
An example to create a source build configuration from scratch is available here: {{Internal|OpenShift_Build_Operations#Source_Build|Create a Source Build}}
 
====The Build Process====
 
For a source build, the build process takes place in the [[#Builder_Image|builder image]] container. It consists of the following steps:
* Download the [[#S2I_Scripts|S2I scripts]].
* If it is an [[#Incremental_Build|incremental build]], save the previous build's artifacts with [[#save-artifacts|save-artifacts]].
* A work directory is created.
* Pull the source code from the repository into the work directory.
* Heuristics are applied to detect how to build the code.
* Create a TAR that contains  [[#S2I_Scripts|S2I scripts]] and source code.
* Untar [[#S2I_Scripts|S2I scripts]], sources and artifacts.
* Invoke the [[#assemble|assemble]] script.
* The build process changes to 'contextDir' - anything that resides outside 'contextDir' is ignored by the build.
* Run build.
* Push image to docker-registry.default.svc:5000/<''project-name''>/...
 
====Builder Image====
 
The builder image is an image that must contain both compile-time dependencies and build tools, because the build process takes place inside it, and runtime dependencies and the application runtime, because the image will be used to create containers that execute the application. The builder image must also contain the tar archiving utility, available in $PATH, which is used during the build process to extract source code and S2I scripts, and the /bin/sh command line interpreter. A builder image should be able to generate some usage information by running the image with [[docker run]]. An article that shows how to create builder images is available here: https://blog.openshift.com/create-s2i-builder-image/.
 
====Extended Build====
 
An extended build uses a [[#Builder_Image|builder image]] and a ''runtime image'' as two separate images. In this case the builder image contains the build tooling but not the application runtime, while the runtime image contains the application runtime. This is useful when the build tooling should not end up in the runtime image.
 
====S2I Scripts====
 
The S2I scripts encapsulate the build logic and must be executable inside the [[#Builder_Image|builder image]]. They must be provided as an input of the source build and play an essential role in the [[#The_Build_Process|build process]]. They come from one of the following locations, listed below in the inverse order of their precedence (if the same script is available in more than one location, the script from the location listed last in the list is used):
 
* Bundled with the [[#Builder_Image|builder image]], in a location specified as "io.openshift.s2i.scripts-url" label. A common value is "image:///usr/local/s2i". As an example, these are the S2I scripts that come with an EAP7 builder image: [https://github.com/NovaOrdis/playground/blob/master/openshift/sample-s2i-scripts/eap7-image/assemble assemble], [https://github.com/NovaOrdis/playground/blob/master/openshift/sample-s2i-scripts/eap7-image/run run] and [https://github.com/NovaOrdis/playground/blob/master/openshift/sample-s2i-scripts/eap7-image/save-artifacts save-artifacts].
* Bundled with the source code in an .s2i/bin directory of the application source repository.
* Published at a URL that is specified in the [[OpenShift_Build_Configuration_Definition#scripts|build configuration definition]].
 
Both the "io.openshift.s2i.scripts-url" label value specified in the image and the script specified in the [[OpenShift_Build_Configuration_Definition#scripts|build configuration]] can take one of the following forms:
* image:///<''path-to-script-directory''> - the absolute path inside the image.
* file:///<''path-to-script-directory''> - relative or absolute path to a directory on the host where the scripts are located.
* http(s)://<''path-to-script-directory''> - URL to a directory where the scripts are located.
 
The scripts are:
 
=====assemble=====
 
This is a required script. It builds the application artifacts from source and places them into appropriate directories inside the image. The script's main responsibility is to turn source code into a runnable application. It can also be used to inject configuration into the system.
 
It should execute the following sequence:
* Restore build artifacts, if the build is incremental. In this case [[#save-artifacts|save-artifacts]] must be also defined.
* Place the application source in the build location.
* Build.
* Install the artifacts into locations that are appropriate for them to run.
 
EAP7 builder image example: [https://github.com/NovaOrdis/playground/blob/master/openshift/sample-s2i-scripts/eap7-image/assemble assemble].
 
=====run=====
 
This is a required script. It is invoked when the container is instantiated to execute the application.
 
EAP7 builder image example: [https://github.com/NovaOrdis/playground/blob/master/openshift/sample-s2i-scripts/eap7-image/run run].
 
=====save-artifacts=====
 
This is an optional script, needed when incremental builds are enabled. It gathers all dependencies that can speed up the build process, from the build image that has just completed the successful build (the Maven .m2 directory, for example). The dependencies are assembled into a tar file and streamed to standard output.
 
EAP7 builder image example: [https://github.com/NovaOrdis/playground/blob/master/openshift/sample-s2i-scripts/eap7-image/save-artifacts save-artifacts].
 
=====usage=====
 
This is an optional script. Informs the user how to properly use the image.
 
=====test/run=====
 
This is an optional script. It allows creating a simple process to check whether the image is working correctly. For more details see:
 
{{External|https://docs.openshift.com/container-platform/latest/creating_images/s2i_testing.html#creating-images-s2i-testing}}
 
====Incremental Build====
 
The source build may be configured as an ''incremental build'', which reuses previously downloaded dependencies and previously built artifacts, in order to speed up the build.
 
 strategy:
   type: "Source"
   sourceStrategy:
     from:
       ...
     <font color=teal>'''incremental'''</font>: true
 
====Webhooks====
 
A source build can be configured to be automatically triggered when a new event - most commonly a push - is detected by the source repository. When the repository identifies a push, or any other kind of event it was configured for, it makes an HTTP invocation into an OpenShift URL. For internal repositories that run within the OpenShift cluster, such as [[OpenShift Gogs|Gogs]], the URL is https&#58;//openshift.default.svc.cluster.local/oapi/v1/namespaces/<''project-name''>/buildconfigs/<''build-configuration-name''>/webhooks/<''generic-secret-value''>/generic. The secret value can be [[OpenShift_Build_Configuration_Definition#Enable_a_Generic_Webhook_Trigger|manually configured in the build configuration]], as shown here, but it is usually set to a randomly generated value when the build configuration is created. For an external repository, such as GitHub, the URL must be publicly accessible '''<font color=red>TODO</font>'''. Once the build is triggered, it proceeds as it would if it had been triggered by any other means. Note that no build server is required for this mechanism to work, though a build server such as [[OpenShift_CI/CD_Concepts#Overview|Jenkins]] can be integrated and configured to drive a [[#Pipeline_Build|pipeline build]].
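In the [[#Build_Configuration|build configuration]], the corresponding trigger might be declared along these lines (a minimal sketch; the secret is a placeholder):

 kind: BuildConfig
 spec:
   triggers:
   - type: Generic
     generic:
       secret: <''generic-secret-value''>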
 
The detailed procedure to configure a webhook trigger is available here:
 
{{Internal|OpenShift_Build_Operations#Configure_a_Webhook_Trigger_for_a_Source_Build|Configure a Webhook Trigger for a Source Build}}
 
====Tagging the Build Artifact====
 
Once the build process (the [[#assemble|assemble]] script) completes successfully, the build runtime sets the image's command to the [[#run|run]] script, and tags the image with the [[OpenShift Build Configuration Definition#output|output name]] specified in the build configuration.
 
<font color=red>'''TODO'''</font>. How to tag the output image with a dynamic tag that is generated from the information in the source code itself?
 
====MAVEN_MIRROR_URL====

<font color=red>'''TODO'''</font> 'MAVEN_MIRROR_URL' is an environment variable interpreted by the S2I builder, which uses the Maven repository whose URL is specified as a source of artifacts. For more details see: {{Internal|OpenShift_Nexus#MAVEN_MIRROR_URL|OpenShift Nexus}}
 
===Pipeline Build===
 
{{External|https://docs.openshift.com/container-platform/latest/architecture/core_concepts/builds_and_image_streams.html#pipeline-build}}
 
{{External|https://docs.openshift.com/container-platform/latest/install_config/configuring_pipeline_execution.html#install-config-configuring-pipeline-execution}}
 
{{External|https://docs.openshift.com/container-platform/latest/dev_guide/dev_tutorials/openshift_pipeline.html}}
 
The ''pipeline build strategy'' allows developers to define the build logic externally, as a [[Jenkins_Concepts#Pipeline|Jenkins Pipeline]]. The logic is executed by an [[OpenShift_CI/CD_Concepts#Jenkins_Integration|OpenShift-integrated Jenkins instance]], which uses a series of specialized plug-ins to work with OpenShift. More details about how OpenShift and Jenkins interact are available on the [[OpenShift_CI/CD_Concepts#Overview|OpenShift CI/CD Concepts]] page. The specification of the build logic can be embedded directly into an OpenShift build configuration object, as shown below, or it can be specified externally in a [[OpenShift_CI/CD_Concepts#Jenkinsfile|Jenkinsfile]] which is then automatically integrated with OpenShift. The build can be started, monitored, and managed by OpenShift in the same way as any other build type, but it can at the same time be managed from the Jenkins UI - the two representations are kept in sync. The pipeline's graphical representation is available both in the integrated Jenkins instance and in OpenShift directly, as a "Pipeline":
 
[[Image:OpenShiftPipeline.png]]
 
The pipeline build strategy is specified in the [[#Build_Configuration|build configuration]] as follows:
 
kind: BuildConfig
spec:
  '''strategy''':
      '''type''': <font color=teal>'''JenkinsPipeline'''</font>
      <font color=teal>'''jenkinsPipelineStrategy'''</font>:
        <font color=teal>'''jenkinsfile'''</font>: |-
            node('mvn') {
                //
                // Groovy script that defines the pipeline
                //
            }
  '''source''':
    '''type''': none
  '''output''': {}
 
This is an example of specifying the build logic inline in the build configuration. It is also possible to specify the build logic externally in a [[OpenShift_CI/CD_Concepts#Jenkinsfile|Jenkinsfile]]. Note that unlike in the [[#Source-to-Image_.28S2I.29_Build|source build]]'s case, no "output" is specified, because this is fully defined in the Jenkins Groovy script. "source" may be used to specify a source repository URL that contains the [[OpenShift_CI/CD_Concepts#Jenkinsfile|Jenkinsfile]] pipeline definition.
 
An example of how to create a pipeline build configuration from scratch is available here: {{Internal|OpenShift_Build_Operations#Pipeline_Build|Create a Pipeline Build}}
 
====<span id='CICD_Support'></span><span id='CI/CD_Support'></span>CI/CD====
 
OpenShift enables [[Software Development#DevOps|DevOps]]. It has built-in support for the [[OpenShift CI/CD Concepts#Overview|Jenkins CI server]], providing a native way of doing [[Continuous Integration#Overview|Continuous Integration]] and [[Continuous Delivery#Overview|Continuous Delivery]]. OpenShift can integrate with Git-based source code repositories: when a developer pushes code to the Git repository, Jenkins pulls the code and performs a project build. After the build is complete, Jenkins invokes the OpenShift master API to initiate a [[#Source-to-Image_.28S2I.29_Build|source-to-image (S2I) build]]. As part of the S2I process, the latest project artifact is downloaded from Jenkins and included in the image that runs on the OpenShift node.
 
{{Internal|OpenShift CI/CD Concepts|OpenShift CI/CD Concepts}}
 
{{Internal|OpenShift CI/CD Operations|OpenShift CI/CD Operations}}


===Custom Build===

The custom build strategy allows defining a specific builder image responsible for the entire build process, which makes it possible to customize the build process.
 
The ''custom builder image'' is a plain Docker image embedded with build process logic.
 
===Chained Builds===
 
Two builds may be chained: one produces binaries from a builder image and source, and pushes the artifacts into an image stream; the other pulls the binaries produced by the first build from the image stream and places them into a runtime image, based on a Dockerfile. A separate image is thus created.
 
<span id='Build_Config'></span>
 
==Build Configuration==
 
A <tt>BuildConfig</tt> object is the definition of an entire build. It contains a description of how to combine source code and a base image to produce a new image. The <tt>BuildConfig</tt> lists the location of the source code, the [[#Build_Trigger|build triggers]], the [[#Build_Strategy|build strategy]], various other build configuration parameters, such as the [[OpenShift_Build_Operations#Set_Maximum_Duration|maximum allowed duration]], and the specification of the output of the build, which is in most cases a Docker image that gets pushed into the integrated registry. Both [[oc new-app]] and [[oc new-build]] create <tt>BuildConfig</tt> objects. The preferred way to create a build configuration is to start with [[oc new-build]] and customize the resulting build configuration:
 
{{Internal|OpenShift_Build_Operations#Create_a_Build_Configuration|Create a Build Configuration with oc new-build}}
 
{{Internal|OpenShift Build Configuration Definition|Build Configuration Definition}}
 
Generally, a build starts immediately after the build configuration is declared, and if it is successful, a new image is pushed into the artifact image stream.
 
===Build Trigger===
 
Builds can be triggered by the following events:
 
<font color=red>'''TODO''': when does a build start automatically and when doesn't.</font>
 
====github, generic, gitlab, bitbucket====
 
Source code change signaled by a GitHub webhook or a generic webhook. An internal repository must be configured to invoke a URL similar to https&#58;//openshift.default.svc.cluster.local/oapi/v1/namespaces/$CICD_PROJECT/buildconfigs/<''application-name''>/webhooks/${WEBHOOK_SECRET}/generic. An external repository must be configured to invoke either into Jenkins, using a URL similar to <font color=red>?</font>, or into the associated build configuration, using a URL similar to <font color=red>?</font>.
 
====<span id='imageChange'></span>ImageChange====
 
{{External|https://docs.openshift.com/container-platform/3.7/dev_guide/deployments/basic_deployment_operations.html#image-change-trigger}}
 
Base image change, in case of [[#Source-to-Image_.28S2I.29_Build|source-to-image]] builds.
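In the build configuration, an image change trigger might look like this (a minimal sketch; an empty <tt>imageChange</tt> element follows the builder image declared by the strategy):

 kind: BuildConfig
 spec:
   triggers:
   - type: ImageChange
     imageChange: {}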
 
====Build Configuration Change====
 
{{External|https://docs.openshift.com/container-platform/3.7/dev_guide/deployments/basic_deployment_operations.html#config-change-trigger}}
 
====Manual Build====
 
Manual with [[oc start-build]].
 
===Build Source===
<span id='Build_Sources'></span>
 
A build configuration accepts the following type of sources:
* Git
* Binary (Git and binary are mutually exclusive).  See https://docs.openshift.com/container-platform/latest/dev_guide/builds/build_inputs.html#binary-source.
* Dockerfile
* Image
* [[#Build_Secrets|Input secrets]]
* External artifacts
 
====Build Secrets====
<span id='Build Configuration and Build Secrets'></span>
 
The source element may refer to build [[OpenShift_Security_Concepts#Secret|secrets]]:
 
kind: BuildConfig
...
spec:
  ...
  source:
    ...
    <font color=teal>'''sourceSecret'''</font>:
      name: some-secret
 
===Build Node Selection===
 
Builds can be assigned to specific nodes, by specifying labels in the "nodeSelector" field of the build configuration:
 
kind: BuildConfig
...
spec:
  nodeSelector:
    env: app
 
===Build Resources===
 
Build resources can be set up in the build configuration:
 
kind: BuildConfig
<font color='teal'>'''spec'''</font>:
  <font color='teal'>'''resources'''</font>:
    <font color='teal'>'''limits'''</font>:
      cpu: ...
      memory: ...
 
==Build Operations==
 
{{Internal|OpenShift Build Operations|Build Operations}}
 
=Image=
 
{{External|https://docs.openshift.com/container-platform/latest/architecture/core_concepts/containers_and_images.html#docker-images}}
 
An OpenShift ''image'' represents an ''immutable'' [[Docker Concepts#Image|container image in Docker format]]. Images are not project-scoped, but cluster-scoped: any user from any project can get information about any image in the cluster with [[OpenShift_Image_and_ImageStream_Operations#List_Images_available_to_the_Cluster|oc get images]], provided that they have sufficient cluster-level privileges. One role that permits inspecting image information is [[OpenShift_Security_Concepts#.2Fcluster-reader|/cluster-reader]]. On the other hand, a user that only has project-level privileges cannot inspect images.
 
Information about a specific image can be obtained with [[OpenShift_Image_and_ImageStream_Operations#Image_Information|oc get image <''image-name''>]]
 
{{Internal|OpenShift Image Definition|OpenShift Image Definition}}
 
==Image Name==
 
An image is identified by a name, which is the SHA256 hash of the image: "sha256:ea573da7c263e68f2d021c63bec218b79699a0b48e58b3724360de9c6900ca46". The name can be local to the current cluster - the images produced as artifacts of the cluster's projects do not exist anywhere else, until they are explicitly published - or point to a remote Docker registry. An OpenShift cluster usually comes with its own [[#Integrated_Docker_Registry|integrated Docker registry]], accessible to all projects, whose main function is to store images produced by the projects as a result of [[#Build|build activities]].
 
==Image Reference==
 
The image contains a "dockerImageReference" entry that maintains the location the image can be found at:
 
docker-registry.default.svc:5000/novaordis-dev/tasks@sha256:41976593d219eb2008a533e7f6fbb17e1fc3391065e2d592cc0b05defe5d5562
 
registry.access.redhat.com/redhat-openjdk-18/openjdk18-openshift@sha256:ea573da7c263e68f2d021c63bec218b79699a0b48e58b3724360de9c6900ca46
 
=Image Stream=
 
An ''image stream'' is similar to a [[Docker_Concepts#Image_Repository|Docker image repository]], in that it contains a group of related Docker images identified by [[#Image_Stream_Tag|image stream tags]].
 
The difference between referencing an image stream (or a tag within an image stream) and referencing an image registry/namespace/repository:tag directly is that actions can be triggered automatically when the underlying image changes. These actions may include functionality to control when image updates are rolled out.
 
Logically, an image stream is analogous to a branch in a source code repository. The image stream presents a single virtual view of related images and allows controlling which images are rolled out to builds and applications. The stream may contain images from the OpenShift integrated Docker registry, other image streams, or other image repositories. OpenShift stores complete metadata about each image, including the example command, entry point and environment variables.
 
{{Internal|OpenShift_Image_Stream_Definition#Overview|Image Stream Definition}}
 
Image streams exist as OpenShift objects, describable with <tt>oc get is</tt>, but they also have a representation in the cluster's [[OpenShift_Concepts#Integrated_Docker_Registry|integrated docker repository]] docker-registry.default.svc:5000/, as ''project-name''/''image-stream-name''. As an example, for a "blue-project" and a "red-is":
 
kind: '''ImageStream'''
metadata:
  ...
  '''name''': '''red-is'''
  '''namespace''': '''blue-project'''
spec:
  tags:
  - '''name''': '''latest'''
    '''from''':
      kind: DockerImage
      '''name''': '''registry.access.redhat.com/redhat-openjdk-18/openjdk18-openshift'''
 
[[Image:ImageStreamInRegistry.png]]
 
Example of a JSON file that can be used to create image streams in OpenShift: {{External|https://github.com/openshift/origin/blob/master/examples/image-streams/image-streams-rhel7.json}}
 
[[#Build|Builds]] and [[#Deployment|deployments]] can watch an image stream to receive notifications when new images are added and react by performing a build or a deployment.
 
Operations:
 
{{Internal|OpenShift_Image_and_ImageStream_Operations|ImageStream Operations}}
 
==="openshift" Image Streams===
 
A set of standard image streams come pre-configured in the "[[#.22openshift.22_Project|openshift]]" project of an OpenShift installation, but image streams can be [[OpenShift_Image_and_ImageStream_Operations#Create_an_Image_Stream|created in any project]].
 
==Image Stream Lookup Policy==
 
The lookup policy specifies how other resources reference this image within this namespace.
 
Possible values:
* '''local''' - changes the Docker short image reference, such as "mysql" or "php:latest", on objects in this namespace to the image ID whenever they match this image stream, instead of reaching out to a remote registry. The name will be fully qualified to an image ID if found. The tag's referencePolicy is taken into account on the replaced value. It only works within the current namespace.
 
==Image Stream Tag==
 
The default tag is called "latest". A tag may point to an external Docker registry, to other tags in the same image stream or a different image stream, or be controlled to directly point to known images. An image stream tag's full name is <''image-stream-name''>:<''tag-name''>. For example, the tag "0.11.29" exposed by the "gogs" image stream, as shown here:
 
...
kind: ImageStream
metadata:
  '''name''': <font color=teal>gogs</font>
spec:
  tags:
  - '''name''': <font color=teal>"0.11.29"</font>
    ...
 
is referred to by a deployment configuration as:
 
...
triggers:
- type: ImageChange
    imageChangeParams:
      ...
      from:
        kind: ImageStreamTag
        '''name''': <font color=teal>'''gogs:0.11.29'''</font>
 
<font color=red> Images can be pushed to an image stream tag directly via the integrated Docker registry.</font>
 
The image stream tag has a '''referencePolicy''', which defines how other components should consume this image. The reference policy's '''type''' determines how the image pull spec should be transformed when the image stream tag is used in deployment configuration triggers or new builds. The default value is "Source", indicating the original location of the image should be used (if imported). The user may also specify "Local", indicating that the pull spec should point to the integrated Docker registry and leverage the registry's ability to proxy the pull to an upstream registry. "Local" allows the credentials used to pull the image to be managed from the image stream's namespace, so others on the platform can access a remote image but have no access to the remote secret. It also allows the image layers to be mirrored into the local registry, so the images can still be pulled even if the upstream registry is unavailable.
 
==Image Pull Policy==
 
When the container is created, the runtime uses the "imagePullPolicy" to determine whether to pull the image prior to starting the container. More details available here:
 
{{Internal|Pod_Definition#imagePullPolicy|Pod Definition - imagePullPolicy}}
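A minimal sketch of where the policy appears in a pod specification (the container name and image reference are hypothetical):

 kind: Pod
 spec:
   containers:
   - name: my-app
     image: docker-registry.default.svc:5000/blue-project/red-is:latest
     imagePullPolicy: IfNotPresent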
 
==To Process==
 
* https://github.com/openshift/origin/tree/master/examples/image-streams


=Deployment=

{{External|https://docs.openshift.com/container-platform/latest/architecture/core_concepts/deployments.html}}
 
{{External|https://docs.openshift.com/container-platform/latest/dev_guide/deployments/how_deployments_work.html}}
 
A ''deployment'' is a [[#Replication_Controller|replication controller]] based on a user-defined [[#Template|template]] called a [[#DeploymentConfig|deployment configuration]]: a successful deployment results in a new replication controller being created.
 
A deployment adds extended support for software development and deployment lifecycle. Deployments are created manually or in response to triggered events, and the most common events that trigger a deployment are either an image change or a configuration change.
 
The deployment system provides a [[#DeploymentConfig|deployment configuration]], which is a [[#Template|template]] for deployments, triggers that drive automated deployments in response to events, user-customizable strategies to transition from the previous deployment to the new deployment, a rollback procedure, and manual replication scaling. The deployment configuration version is incremented each time a new deployment is created from the configuration. Deployments allow defining hooks to be run before and after the replication controller is created.
 
Deployments allow [[#Rollback|rollbacks]].
 
Deployments allow manual replication scaling or autoscaling.
 
The deployments are triggered with [[oc deploy]].


==Deployment Configuration==
<span id='DeploymentConfig'></span>
<span id="Deployment_Configuration"></span>A ''deployment configuration'' is a user-defined [[#Template|template]] for performing [[#Deployment|deployments]], which result in running applications. The deployment configuration defines the template for a [[#Pod|pod]]. It manages deploying new images or configuration changes whenever those change. A single deployment configuration is usually analogous to a single micro service.
Each deployment is represented as a [[#Replication_Controller|replication controller]]. The OpenShift environment creates a [[#Replication_Controller|replication controller]] to run the application in response to a deployment configuration. The deployment configuration contains a version number that is incremented each time a replication controller is created from the configuration.
Deployment configurations can support many different deployment patterns, including full restart, customizable rolling updates, and fully custom behaviors, as well as pre- and post-hooks. It supports automatic rolling back to the last successful revision of configuration, in case the current template fails to deploy.
The DeploymentConfig contains:
* [[#Replication_Controller|Replication controller]] definition.
* Default replica count fo the deployment.
* [[#Deployment_Configuration_Triggers|Triggers]] for creating new deployments automatically, in response to events. If no triggers are defined, deployments must be started manually.
* Strategy for transitioning before deployments.
* Life-cycle hooks. Every hook has a failure policy (Abort, Retry, Ignore).
{{Internal|OpenShift DeploymentConfig Definition|DeploymentConfig Definition}}
The DeploymentConfig for a project can be listed with:
[[Oc_get#all|oc get all]]
[[Oc_get#dc|oc get dc]]
===Deployment Triggers===
<span id='Deployment_Configuration_Triggers'></span>
The deployment triggers are specified in the deployment configuration, and can be modified [[OpenShift_Application_Operations#Configuring_Deployment_Triggers_from_Command_Line|from command line]]:
====ConfigChange====
Results in a new deployment, and a new replication controller being created whenever changes are detected to replication controller template of deployment configuration. In the presence of a ConfigChange trigger, the first replication controller is automatically created when the deployment configuration itself is created.
====ImageChange====
The deployment trigger is a change in the image stream. If we do not want the result of a build to be deployed automatically, even if the build pushes a new image in the repository, we simply do not list the "ImageChange" deployment trigger in the deployment configuration.
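As a sketch, both triggers might be declared in the deployment configuration as follows (the container and image stream names are hypothetical):

 kind: DeploymentConfig
 spec:
   triggers:
   - type: ConfigChange
   - type: ImageChange
     imageChangeParams:
       automatic: true
       containerNames:
       - my-container
       from:
         kind: ImageStreamTag
         name: my-image-stream:latest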
==Replication Controller==


A ''replication controller'' is one of the [[OpenShift Pod Concepts#Controller|pod controller]] types available in OpenShift. It resides on the master, and ensures that the specified number of pod [[Kubernetes Concepts#Replica|replicas]] defined in the replication controller configuration are running at all times.


Logically, the replication controllers constitute the [[#Replication_Layer|replication layer]].  


The definition of a replication controller includes the number of replicas to be maintained, the pod definition for creating the replicated pod, and a selector for identifying managed pods. If pods exit or are deleted, either explicitly or because the [[#Node|node]] they run on is taken out of service, the replication controller instantiates more pods, up to the desired number. If there are more pods running than desired, the replication controller deletes as many as necessary. However, it is NOT the replication controller's job to perform autoscaling based on load or traffic. Replication controllers do not exist as physical processes, meaning they do not run in pods; they only exist as entries in etcd, and the master executes the logic.
 
A replication controller is most commonly used to represent a single deployment of part of an application based on a built image.
 
There have been situations in which a failed deployment could be re-started by deleting the replication controller.
 
The replication controllers of a project can be listed with:
[[Oc_get#all|oc get all]]
[[Oc_get#rc|oc get rc]]
 
===Rollout===
 
A rollout is exposed as a [[#Replication_Controller|replication controller]], and the deployment process manages scaling down old replication controllers and scaling up new ones, implementing one of the [[#Deployment_Strategy|deployment strategies]]. The rollout is performed with:
 
[[OpenShift_Application_Operations#Rollout|oc rollout]]
 
==Deployment Configurations and Replication Controllers==
 
A deployment configuration triggers creation of a [[#Replication_Controller|replication controller]].
 
[[OpenShift_Deployment_Operations#Deleting_a_Deployment_Configuration|Deleting a deployment configuration]] will automatically delete the [[#Replication_Controller|replication controllers]] generated by that deployment configuration.
 
==Rollback==
 
[[#Deployment|Deployments]] allow ''rollbacks'' to previous versions of an application: when one deployment is superseded by another, the previous replication controller is retained, with its number of replicas set to 0. When triggered - for example, when the template fails to deploy - the rollback reverts an application to the last successful deployment. It is done with [[oc rollback]], the API or the web console.


==Deployment Strategy==

The [[#Deployment_Configuration|deployment configuration]] defines a ''deployment strategy'', which determines the deployment process. The deployment strategy uses readiness checks to determine if a new pod is ready for use. If a readiness check fails, the deployment configuration retries until it times out. The readiness timeout value is set in the deployment configuration.
 
The deployment strategy is implemented during the [[#Rollout|rollout]] process.


===Rolling Deployment Strategy===

The default deployment strategy, if a deployment strategy is not explicitly specified in the [[#DeploymentConfig|deployment configuration]]. It performs rolling updates. It supports life-cycle hooks for injecting code into the deployment process.

It consists of the following steps:
* Execute the "pre" life-cycle hook.
* Scale up the new deployment by one or more pods, based on the <tt>maxSurge</tt> value, waiting until all readiness checks complete.
* Scale down the old deployment by one or more pods, based on the <tt>maxUnavailable</tt> value.
* Repeat scaling until the new deployment reaches the desired replica count and the old deployment has scaled to zero.
* Execute any "post" life-cycle hook.

When scaling down, the strategy waits for pods to become ready, so it can decide whether further scaling would affect availability. If scaled-up pods never become ready, the deployment times out and results in a deployment failure.
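The rolling parameters might be specified in the deployment configuration along these lines (a minimal sketch; the values are illustrative):

 kind: DeploymentConfig
 spec:
   strategy:
     type: Rolling
     rollingParams:
       maxSurge: "25%"
       maxUnavailable: "25%"
       timeoutSeconds: 600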


===Recreate Deployment Strategy===

The recreate strategy is appropriate when the application does not support old versions and new versions running together, or when the application uses ReadWriteOnce volumes that do not support sharing between multiple replicas. The recreate strategy implies downtime: there is a time interval when no application instance is running.


It consists of the following steps:
* Execute the "pre" life-cycle hook.
* Scale down the previous deployment to zero.
* Scale up the new deployment.
* Execute the "post" life-cycle hook.
During scale up, if the replica count of the deployment is greater than one, the first deployment replica is validated for readiness before fully scaling up the deployment. If this validation fails, the deployment fails.
...
kind: DeploymentConfig
spec:
  strategy:
    activeDeadlineSeconds: 21600
    type: Recreate
    recreateParams:
      pre:
      mid:
      post:
      timeoutSeconds:   
Parameters:
* '''activeDeadlineSeconds''': the duration in seconds that the deployer pods for this deployment may be active on a node before the system actively tries to terminate them.


===Custom Deployment Strategy===

Allows for custom deployment commands. The optional "command" array overrides the [[Dockerfile#CMD|CMD]] directive specified in the Dockerfile. The optional "environment" variables are added to the strategy process' execution.
==Deployment Operations==
{{Internal|OpenShift Deployment Operations|Deployment Operations}}
==Environment Variables to Use for Strategy Process==
* OPENSHIFT_DEPLOYMENT_NAME
* OPENSHIFT_DEPLOYMENT_NAMESPACE
==To Process==
<font color=red>
* Application release strategies for Red Hat OpenShift Container Platform 3 https://www.redhat.com/cms/managed-files/cl-openshift-container-platform-application-release-strategies-reference-architecture-f6609-201704-en.pdf
</font>
=Region=
=Zone=
=High Availability (HA)=
==Infrastructure HA==
See [[#Master_HA|Master HA]].
==Application HA==
OpenShift ensures high availability by deploying the same image in multiple containers across multiple hosts and load balancing among them. This technique also provides horizontal scalability for a service packaged into an [[Docker_Concepts#Container_Image|image]].
=Installation=
There are two installation procedures: RPM and Containerized.
An RPM installation installs all services through package management and configures services to run within the same user space.
A containerized installation installs services using container images and runs separate services in individual containers.
For practical details on installing various OpenShift version, see: {{Internal|OpenShift Installation#Subjects|OpenShift Installation}}
Related:
{{Internal|OpenShift Logging Concepts#Installation|OpenShift Logging Installation}}
Installation is performed by Ansible, usually deployed on the environment's support server. Ansible configuration is available under /etc/ansible and the installation logic under /usr/share/ansible.
=Metrics=
{{Internal|OpenShift Metrics Concepts|OpenShift Metrics Concepts}}
=Horizontal Pod Autoscaler (HPA)=
{{External|https://docs.openshift.com/container-platform/latest/dev_guide/pod_autoscaling.html#dev-guide-pod-autoscaling}}
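An autoscaler can be created from the command line with a command similar to the following (the deployment configuration name and the thresholds are hypothetical):

 oc autoscale dc/my-app --min 1 --max 5 --cpu-percent=80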
=Logging=
{{Internal|OpenShift Logging Concepts|OpenShift Logging Concepts}}
=Events=
{{External|https://docs.openshift.com/container-platform/3.5/dev_guide/events.html}}
OpenShift ''events'' encapsulate information about specific conditions detected in the OpenShift cluster, and allow the OpenShift management facilities to record information about those occurrences in a resource-agnostic manner. They also allow administrators and developers to consume information about system components in a unified way. A list of events can be obtained with:
[[Oc_get#events|oc get events]] [-n ''<project-name>'']
Events for a project can be reviewed by navigating with the web console to the project -> Monitoring -> Right Side.
Events contain:
* ''type'': Normal, Warning
* <font color=red>''kind'': Configuration, Node, Pod, DaemonSet, Container, Health, Image, Image Manager, System.</font>
* ''[[#Event_Reasons|reason]]''
* ''source''
* ''message''
==Event Reasons==
* FailedScheduling
* <span id='OutOfDisk'></span>OutOfDisk
* MatchNodeSelector
* SuccessfulCreate
=Command Line Tools=
{{Internal|OpenShift Command Line Tools|OpenShift Command Line Tools (CLI)}}
=Admission Control=
{{External|https://docs.openshift.com/container-platform/latest/architecture/additional_concepts/admission_controllers.html}}
Admission control plug-ins intercept requests to the master API, after authentication and authorization was enforced. There's a chain of plug-ins (also known as the ''admission chain''), and if a plug-in rejects the request, the request fails. The plug-in may modify the request object, and related resources. The default list of admission control plug-ins is configured in the [[Master-config.yml#pluginConfig|master-config.yaml's admissionConfig/pluginConfig]] section.
Customizable admission control plug-ins:
* '''BuildDefaults''' https://docs.openshift.com/container-platform/latest/install_config/build_defaults_overrides.html#ansible-setting-global-build-defaults
* '''ProjectRequestLimit''' https://docs.openshift.com/container-platform/latest/admin_guide/managing_projects.html#limit-projects-per-user
* '''RestrictSubjectBindings''' https://docs.openshift.com/container-platform/latest/admin_solutions/user_role_mgmt.html#role-binding-restriction
* Pod placement https://docs.openshift.com/container-platform/latest/admin_guide/scheduling/scheduler.html#controlling-pod-placement
* Init containers https://docs.openshift.com/container-platform/latest/architecture/core_concepts/containers_and_images.html#init-containers
=OpenShift and JBoss=
{{Internal|OpenShift and JBoss|OpenShift and JBoss}}
=Jobs=
{{External|https://docs.openshift.com/container-platform/latest/dev_guide/jobs.html}}
apiVersion: batch/v1
kind: Job
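A complete definition might look along these lines (a minimal sketch; the job name, image and command are hypothetical):

 apiVersion: batch/v1
 kind: Job
 metadata:
   name: example-job
 spec:
   parallelism: 1
   completions: 1
   template:
     spec:
       containers:
       - name: example-job
         image: busybox
         command: ["echo", "hello"]
       restartPolicy: OnFailure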
=Cron Jobs=
{{External|https://docs.openshift.com/container-platform/latest/dev_guide/cron_jobs.html}}
apiVersion: batch/v1
kind: CronJob
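A minimal sketch of a CronJob definition (the name, schedule, image and command are illustrative; the exact API group and version depend on the cluster version):

 apiVersion: batch/v1
 kind: CronJob
 metadata:
   name: example-cronjob
 spec:
   schedule: "*/5 * * * *"
   jobTemplate:
     spec:
       template:
         spec:
           containers:
           - name: example-cronjob
             image: busybox
             command: ["echo", "hello"]
           restartPolicy: OnFailure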
=Configuration Data Externalization=
Configuration data must never be stored in application source code; it must be externalized on storage.
==Environment Variables==
{{Internal|OpenShift Environment Variables|Environment Variables}}
=Custom Metadata Annotations=
Custom metadata annotations can be specified in the [[OpenShift_DeploymentConfig_Definition#labels|spec.template.metadata.annotations]] section of the [[#Deployment_Configuration|deployment configuration]]. Those annotations will be injected into the pod metadata, and will be accessible from the pod with the [[OpenShift Volume Concepts#Downward_API|Downward API]].
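As a sketch, the annotations would go in the deployment configuration as follows (the annotation key and value are hypothetical):

 kind: DeploymentConfig
 spec:
   template:
     metadata:
       annotations:
         example.com/build-info: "1.2.3"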
=Common Images=
* [[MySQL and OpenShift#Overview|MySQL]]


OpenShift Objects

Defined by OpenShift, outside Kubernetes. These objects also have an API type, which is listed below.

Other Objects that do not have an API Representation

OpenShift Hosts

Master

A master is a RHEL or Red Hat Atomic host that orchestrates and schedules resources.

The master maintains the state of the OpenShift environment.

The master provides the single API all tooling clients must interact with. All OpenShift tools (CLI, web console, IDE plugins, etc.) speak directly with the master.

The access is protected via fine-grained role-based access control (RBAC).

The master monitors application health via user-defined pod probes, ensuring that all containers that should be running are running. It handles restarting pods that failed probes automatically. Pods that fail too often are marked as "failing" and are temporarily not restarted. The OpenShift service layer sends traffic only to healthy pods.

Kubernetes Master

Masters use the etcd cluster to store state, and also cache some of the metadata in their own memory space.

Master HA

Multiple masters can be present to ensure HA. A typical HA configuration involves three masters and three etcd nodes. Such a topology is built by the OpenShift 3.5 ansible inventory file shown here.

An alternative HA configuration consists of a single master node and multiple (at least three) etcd nodes.

The native HA Method

Master API

The master API is available internally (inside the OpenShift cluster) at https://openshift.default.svc.cluster.local. This value is available internally to pods as the KUBERNETES_SERVICE_HOST environment variable.

Node

A node is a Linux container host. It is based on RHEL or Red Hat Atomic and provides a runtime environment where applications run inside containers, which run inside pods assigned by the master. Nodes are orchestrated by masters and are usually managed by administrators and not by end users. Nodes can be organized into different topologies. From a networking perspective, nodes get allocated their own subnets of the cluster network, which they then use to allocate addresses to the pods, and containers, running on them. More details about OpenShift networking are available here.

A node daemon runs on each node.

Node operations:

TODO:

  • What is the difference between the kubelet and the node daemon?
  • kube proxy daemons.

Infrastructure Node

Also referred to as "infra node".

This is where infrastructure pods run. Metrics, logging, routers are considered infrastructure pods.

The infrastructure nodes, especially those running the metrics pods and ..., should be closely monitored to detect early CPU, memory and disk capacity shortages on the host system.

OpenShift Cluster

All nodes that share the SDN.

Container

https://docs.openshift.com/container-platform/latest/architecture/core_concepts/containers_and_images.html#containers

A container is a kernel-provided mechanism to run one or more processes, in a portable manner, in a Linux environment. Containers are isolated from each other on a host and are initialized from images. All application instances - based on various languages/frameworks/runtimes - as well as databases, run inside containers on nodes. For more details, see

Docker Containers

A pod can have application containers and init containers.

In OpenShift, containers are never restarted. Instead, new containers are spun up to replace old containers when needed. Because of this behavior, persistent storage volumes mounted on containers are critical for maintaining state such as for configuration files and data files.

Docker Support in OpenShift

All containers that are running in pods are managed by Docker servers.

Docker Storage in OpenShift

Each Docker server requires Docker storage, which is allocated on each node in a space specially provisioned as the Docker storage backend. The Docker storage is ephemeral and separate from OpenShift storage. The default loopback storage backend for Docker is a thin pool on loopback devices, which is not appropriate for production.

As of OCP 3.7, Docker storage can be backed by one of two options (storage drivers): a devicemapper thin pool logical volume or an overlay2 filesystem (recommended). More details about how Docker storage is configured on a specific system can be obtained from:

cat /etc/sysconfig/docker-storage

DOCKER_STORAGE_OPTIONS="--storage-driver devicemapper --storage-opt dm.fs=xfs --storage-opt dm.thinpooldev=/dev/mapper/docker_vg-container--thinpool --storage-opt dm.use_deferred_removal=true --storage-opt dm.use_deferred_deletion=true"

and also

docker info

The node Daemon

Pod

Contains: controller, pod configuration, pod lifecycle, terminal state, pod definition, pod name, pod placement, pod probe, liveness probe, readiness probe, local manifest pod, bare pod, static pod, pod type, terminating, nonterminating, pod preset, init container.

Label

Labels are simple key/value pairs that can be assigned to any resource in the system and are used to group and select arbitrarily related objects. Most Kubernetes objects can include labels in their metadata. Labels provide the default way of managing objects as groups, instead of having to handle each object individually. Labels are a Kubernetes concept.

Labels associated with a node can be obtained with oc describe node.

Selector

A selector is a set of labels. It is also referred to as label selector. Selectors are a Kubernetes concept.
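
As a minimal sketch, assuming a hypothetical "some-app" label value, this is how pod labels and a label selector on a service fit together; the service targets every pod carrying the matching label:

...
kind: Pod
metadata:
  name: some-app-1
  # labels assigned to this pod
  labels:
    app: some-app

...
kind: Service
metadata:
  name: some-service
spec:
  # label selector: the service selects all pods labeled app=some-app
  selector:
    app: some-app
  ports:
  - port: 8080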

Node Selector

A node selector is an expression applied to a node which, depending on whether the node has the labels expected by the node selector, allows or prevents pod placement on that node during the pod scheduling operation. It is the scheduler that evaluates the node selector expression and decides on which node to place the pod. DaemonSets also use node selectors when placing their associated pods on nodes.

Node selectors can be associated with an entire cluster, with a project, or with a specific pod. The node selectors can be modified as part of a node selector operation.

Cluster-Wide Default Node Selector

The cluster-wide default node selector is configured during OpenShift cluster installation to restrict pod placement on specific nodes. It is specified in the projectConfig.defaultNodeSelector section of the master configuration file master-config.yml. It can also be modified after installation with the following procedure:

Configure a Cluster-Wide Default Node Selector
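
For orientation only, this is a minimal sketch of the relevant master-config.yml fragment; the label expression used as a value is a hypothetical example:

projectConfig:
  # applied when scheduling pods from projects that do not define their own node selector
  defaultNodeSelector: "region=primary"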

Per-Project Node Selector

The per-project node selector is used by the scheduler to schedule pods associated with the project. The per-project node selector takes precedence over the cluster-wide default node selector, when both exist. It is available as "openshift.io/node-selector" project metadata (see below). If "openshift.io/node-selector" is set to an empty string, the project will not have an administrator-set node selector, even if the cluster-wide default has been set. This means that a cluster administrator can set a default to restrict developer projects to a subset of nodes and still enable infrastructure or other projects to schedule onto the entire cluster.

The per-project node selector value can be queried with:

oc get project -o yaml

It is listed as:

...
kind: Project
metadata:
  annotations:
    ...
    openshift.io/node-selector: ""
    ...

The per-project node selector is usually set up when the project is created, as described in this procedure:

Configuring a Per-Project Node Selector during Project Creation

It can also be changed after project creation with oc edit as described in this procedure:

Configuring a Per-Project Node Selector after Project Creation

Per-Pod Node Selector

The declaration of a per-pod node selector can be obtained running:

oc get pod <pod-name> -o yaml

and it is rendered in the "spec:" section of the pod definition:

...
kind: Pod
spec:
  ...
  nodeSelector:
    key-A: value-A


How is the node selector of a pod generated?

TODO: merge with OpenShift_Concepts#Pod_Placement

Once the pod has been created, the node selector value becomes immutable and an attempt to change it will fail. For more details on pod state see Pods.

Precedence Rules when Multiple Node Selectors Apply

TODO

External Resources

Scheduler

https://docs.openshift.com/container-platform/3.5/admin_guide/scheduler.html

Scheduling is the master's main function: when a pod is created, the master determines on what node(s) to execute the pod. This is called scheduling. The layer that handles this responsibility is called the scheduling layer.

The scheduler is a component that runs on master and determines the best fit for running pods across the environment. The scheduler also spreads pod replicas across nodes, for application HA. The scheduler reads data from the pod definition and tries to find a node that is a good fit based on configured policies. The scheduler does not modify the pod, it creates a binding that ties the pod to the selected node, via the master API.

The OpenShift scheduler is based on the Kubernetes scheduler.

Most OpenShift pods are scheduled by the scheduler, unless they are managed by a DaemonSet. In this case, the DaemonSet selects the node to run the pod, and the scheduler ignores the pod.

The scheduler is completely independent and exists as a standalone, pluggable solution. The scheduler is deployed as a container, referred to as an infrastructure container. The functionality of the scheduler can be extended in two ways:

  1. Via enhancements, by adding new predicates and priority functions.
  2. Via replacement with a different implementation.

The pod placement process is described here:

Pod Placement

Default Scheduler Implementation

The default scheduler is a scheduling engine that selects the node to host the pod in three steps:

  1. Filter all available nodes by running through a list of filter functions called predicates, discarding the nodes that do not meet the criteria.
  2. Prioritize the remaining nodes by running them through a series of priority functions that assign each node a score between 0 and 10, where 10 signifies the best possible fit to run the pod. By default all priority functions are considered equivalent, but they can be weighted differently via configuration.
  3. Sort the nodes by score and select the node with the highest score. If multiple nodes have the same score, one is chosen at random.

Note that an insight into how the predicates are evaluated and what scheduling decisions are taken can be achieved by increasing the logging verbosity of the master controllers processes.

Predicates

Static Predicates

  • PodFitsPorts - a node is fit if there are no port conflicts.
  • PodFitsResources - a node is fit based on resource availability. Nodes declare resource capacities, pods specify what resources they require.
  • NoDiskConflict - evaluates if a pod fits based on volumes requested and those already mounted.
  • MatchNodeSelector - a node is fit based on the node selector query.
  • HostName - a node is fit based on the presence of host parameter and string match with host name.

Configurable Predicates

ServiceAffinity

ServiceAffinity filters out nodes that do not belong to the topological level defined by the provided labels. See scheduler.json

LabelsPresence

LabelsPresence checks whether the node has certain labels defined, regardless of value.

Priority Functions

Existing Priority Functions

  • LeastRequestedPriority - favors nodes with fewer requested resources; calculates the percentage of memory and CPU requested by pods scheduled on the node, and prioritizes nodes with the highest available capacity.
  • BalancedResourceAllocation - favors nodes with balanced resource usage rate, calculates difference between consumed CPU and memory as fraction of capacity and prioritizes nodes with the smallest difference. It should always be used with LeastRequestedPriority.
  • ServiceSpreadingPriority - spreads pods by minimizing the number of pods that belong to the same service on the same node.
  • EqualPriority

Configurable Priority Functions

  • ServiceAntiAffinity
  • LabelsPreference

Scheduler Policy

The selection of the predicates and the priority functions defines the scheduler policy. The default policy is configured in the master configuration file master-config.yml as kubernetesMasterConfig.schedulerConfigFile. By default, it points to /etc/origin/master/scheduler.json. The current scheduling policy in force can be obtained with ?. A custom policy can replace it, if necessary, by following this procedure: Modify the Scheduler Policy.

Scheduler Policy File

The default scheduler policy is configured in the master configuration file master-config.yml as kubernetesMasterConfig.schedulerConfigFile, which by default points to /etc/origin/master/scheduler.json. Another example of scheduler policy file:

Scheduler Policy File
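
As a sketch only, a scheduler policy file combines a selection of the predicates and priority functions listed above; the particular selection and weights below are illustrative, not the shipped defaults:

{
  "apiVersion": "v1",
  "kind": "Policy",
  "predicates": [
    {"name": "MatchNodeSelector"},
    {"name": "PodFitsResources"},
    {"name": "NoDiskConflict"}
  ],
  "priorities": [
    {"name": "LeastRequestedPriority", "weight": 1},
    {"name": "BalancedResourceAllocation", "weight": 1},
    {"name": "ServiceSpreadingPriority", "weight": 1}
  ]
}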

Volumes

Persistent Volumes, Persistent Volume Claims, EmptyDirs, ConfigMaps, Downward API, Host Paths, Secrets:

Volumes

etcd

Implements OpenShift's store layer, which holds the current state, desired state and configuration information of the environment.

OpenShift stores image, build, and deployment metadata in etcd. Old resources must be periodically pruned. If a large number of images/builds/deployments is planned, etcd must be placed on machines with large amounts of memory and fast SSD drives.

etcd

etcd and Master Caching

Masters cache deserialized resource metadata to reduce the CPU load and keep the metadata in memory. For small clusters, the cache can use a lot of memory for negligible CPU reduction. The default cache size is 50,000 entries, which, depending on the size of resources, can grow to occupy 1 to 2 GB of memory. The cache size can be configured in master-config.yml.

Image Registries

https://docs.openshift.com/container-platform/latest/install_config/registry/index.html

Integrated Docker Registry

OpenShift contains an integrated Docker image registry, providing authentication and access control to images. The integrated registry should not be confused with the OpenShift Container Registry (OCR), which is a standalone solution that can be used outside of an OpenShift environment. The integrated registry is an application deployed within the "default" project as a privileged container, as part of the cluster installation process. The integrated registry is available internally in the OpenShift cluster at the following service address: docker-registry.default.svc:5000/. It consists of a "docker-registry" service, a "docker-registry" deployment configuration, a "registry" service account and a role binding that associates the service account to the "system:registry" cluster role. More details on the structure of the integrated registry can be generated with oadm registry -o yaml. The default registry also comes with a registry console that can be used to browse images.

The main function of the integrated registry is to provide a default location where users can push images they build. Users push images into the registry and, whenever a new image is stored in the registry, the registry notifies OpenShift about it and passes along image information such as the namespace, the name and the image metadata. Various OpenShift components react by creating new builds and deployments.

Alternatively, the integrated registry may store external images and expose them to the projects that need them. The Docker servers on OpenShift nodes are configured to be able to access the internal registry, but they are also configured with registry.access.redhat.com and docker.io.

Multiple running instances of the container registry require backend storage supporting writes by multiple processes. If the chosen infrastructure provider does not contain this ability, running a single instance of a container registry is acceptable.

Questions:

  • How can I "instruct" an application to use a specific registry, for example a "lab" application to use registry-lab?
  • How do pods that need images go to that registry and not another? The answer to this question lies in the way an application uses the registry.

External Registries

The external registries registry.access.redhat.com and docker.io are accessible. In general, any server implementing Docker registry API can be integrated as a stand-alone registry.

registry.access.redhat.com

registry.access.redhat.com

The Red Hat registry is available at http://registry.access.redhat.com. An individual image can be inspected with a URL similar to https://access.redhat.com/containers/#/registry.access.redhat.com/jboss-eap-7/eap70-openshift/images/1.5-23

docker.io

Docker Hub is available at http://dockerhub.com.

Stand-alone Registry

https://docs.openshift.com/container-platform/latest/install_config/install/stand_alone_registry.html#install-config-installing-stand-alone-registry

Any server implementing Docker registry API can be integrated as a standalone registry: a stand-alone container registry can be installed and used with the OpenShift cluster.

Integrated Registry Console

https://docs.openshift.com/container-platform/latest/install_config/registry/deploy_registry_existing_clusters.html#registry-console

The registry comes with a "registry console" that allows web-based access to the registry. A URL example is https://registry-console-default.apps.openshift.novaordis.io. The registry console is deployed as a registry-console pod, as part of the "default" project.

Registry Operations

Registry Operations

Service

OpenShift Service Concepts

API

Kubernetes API

Networking

In OpenShift, nodes communicate with each other over the infrastructure-provided network. Pods communicate with each other via a software defined network (SDN), also known as the cluster network, which runs on top of that.

Required Network Ports

https://docs.openshift.com/enterprise/3.1/install_config/install/prerequisites.html#prereq-network-access

Default Network Interface on a Host

Every time OpenShift refers to the "default interface" on a host, it means the network interface associated with the default route. The logic that determines it is described here:

Default Network Interface on a Host

SDN, Overlay Network

https://docs.openshift.com/container-platform/latest/architecture/additional_concepts/sdn.html#architecture-additional-concepts-sdn
https://docs.openshift.com/container-platform/latest/install_config/configuring_sdn.html

All hosts in the OpenShift environment are clustered and are also members of the overlay network based on a Software Defined Network (SDN). Each pod gets its own IP address, by default from the 10.128.0.0/14 subnet, that is routable from any member of the SDN network. Giving each pod its own IP address means that pods can be treated like physical hosts or virtual machines in terms of port allocation, networking, naming, service discovery, load balancing, application configuration and migration.

However, it is not recommended that a pod talk to another pod directly by using its IP address. Instead, pods should use services as an indirection layer and interact with the service, which may be backed by different pods at different times. Each service also gets its own IP address from the 172.30.0.0/16 subnet.

The Cluster Network

The cluster network is the network from which pod IPs are assigned. This network block should be private and must not conflict with existing network blocks in your infrastructure that pods, nodes or the master may require access to. The default subnet value is 10.128.0.0/14 (10.128.0.0 - 10.131.255.255) and it cannot be arbitrarily reconfigured after deployment. The size and address range of the cluster network, as well as the host subnet size, are configurable during installation. The cluster network is configured with 'osm_cluster_network_cidr' at installation.

Samples of pod addresses can be obtained by querying service endpoints in a project:

oc get endpoints
NAME           ENDPOINTS          AGE
some-service   10.130.1.17:8080   21m

The master maintains a registry of nodes in etcd, and each node gets allocated, upon creation, an unused subnet from the cluster network. Each node gets a /23 subnet, which means the cluster can allocate 512 subnets to nodes, and each node has 510 addresses to assign to containers running on it. Once the node is registered with the master and gets its cluster network subnet, the SDN creates three devices on the node:

  • br0 - the OVS bridge device that pod containers will be attached to. The bridge is configured with a set of non-subnet specific flow rules.
  • tun0 - an OVS internal port (port 2 on br0). This gets assigned the cluster subnet gateway address, and it is used for external access. The SDN configures netfilter and routing rules to enable access from the cluster subnet to the external network via NAT.
  • vxlan_sys_4789. This is the OVS VXLAN device (port 1 on br0) which provides access to containers on remote nodes. It is referred to as "vxlan0".

Each time a pod is started, the SDN assigns the pod a free IP address from the node's cluster subnet, attaches the host side of the pod's veth interface pair to the OVS bridge br0, and adds OpenFlow rules to the OVS database to route traffic addressed to the new pod to the correct OVS port. If the ovs-multitenant plug-in is active, it also adds OpenFlow rules to tag traffic coming from the pod with the pod's VNID, and to allow traffic into the pod if the traffic's VNID matches the pod's VNID, or it is the privileged VNID 0.

Nodes update their OpenFlow rules in response to the master's notifications when nodes are added or removed. When a new subnet is added, the node adds rules on br0 so that packets with a destination IP address in the remote subnet go to vxlan0 (port 1 on br0) and thus out onto the network. The ovs-subnet plug-in sends all packets across the VXLAN with VNID 0, but the ovs-multitenant plug-in uses the appropriate VNID for the source container.

The SDN does not allow the master host (which is running the OVS processes) to access the cluster network, so a master does not have access to pods via the cluster network, unless it is also running a node.

When ovs-multitenant plug-in is active, the master also allocates VXLAN VNIDs to projects. VNIDs are used to isolate the traffic.

Packet Flow

TODO: https://docs.openshift.com/container-platform/3.5/architecture/additional_concepts/sdn.html#sdn-packet-flow

Pod IP Address

A cluster network IP address, that gets assigned to a pod.

The Services Subnet

OpenShift uses a "services subnet", also known as the "kubernetes services network", in which OpenShift services are created within the SDN. This network block should be private and must not conflict with any existing network blocks in the infrastructure to which pods, nodes, or the master may require access. It defaults to 172.30.0.0/16. It cannot be re-configured after deployment. If changed from the default, it must not be 172.17.0.0/16, which the docker0 network bridge uses by default, unless the docker0 network is also modified. It is configured with 'openshift_master_portal_net', 'openshift_portal_net' at installation and populates the master configuration file servicesSubnet element. Note that Docker expects its insecure registry to be available on this subnet.

The services subnet IP address of a specific service can be displayed with:

oc describe service <service-name>

or

oc get svc <service-name>

The Service IP Address

An IP address from the services subnet that is associated with a service. The service IP address is reported as "Cluster IP" by:

oc get svc <service-name>

This makes sense if we think about a service as a cluster of pods that provide the service.

Example:

oc get svc logging-es
NAME         CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
logging-es   172.30.254.155   <none>        9200/TCP   31d

Docker Bridge Subnet

Default 172.17.0.0/16.

Network Plugin

OpenShift Network Plugins

Open vSwitch

Open vSwitch

DNS

OpenShiftDNSConcepts.png

External DNS Server

An external DNS server is required to resolve the public wildcard name of the environment - and, as a consequence, all public names of various application access points - to the public address of the default router, as per the networking workflow. If more than one router is deployed, the external DNS server should resolve the public wildcard name of the environment to the public IP address of the load balancer. For initial configuration details, see External DNS Setup.

Optional Support DNS Server

An optional support DNS server can be set up to translate local hostnames (such as master1.ocp36.local) to internal IP addresses allocated to the hosts running OpenShift infrastructure. There is no good reason to make these addresses and names publicly accessible. The OpenShift advanced installation procedure factors it in automatically, as long as the base host image OpenShift is installed on top of is already configured to use it to resolve DNS names via /etc/resolv.conf or NetworkManager. No additional configuration via openshift_dns_ip is necessary.

Internal DNS Server

A DNS server used to resolve local resources. The most common use is to translate service names to service addresses. The internal DNS server listens on the service IP address 172.30.0.1:53. It is a SkyDNS instance built into OpenShift. It is deployed on the master and answers the queries for services. The naming queries issued from inside containers/pods are directed to the internal DNS service by Dnsmasq instances running on each node. The process is described below.

The internal DNS server will answer queries on the ".cluster.local" domain that have the following form:

  • <namespace>.cluster.local
  • <service>.<namespace>.svc.cluster.local - service queries
  • <name>.<namespace>.endpoints.cluster.local - endpoint queries

Service DNS can still be used and responds with multiple A records, one for each pod of the service, allowing the client to round-robin between each pod.

Dnsmasq

Dnsmasq is deployed on all nodes, including masters, as part of the installation process, and it works as a DNS proxy. It binds on the default interface, which is not necessarily the interface servicing the physical OpenShift subnet, and resolves all DNS requests. All OpenShift "internal" names - service names, for example - and all unqualified names are assumed to be in the "namespace.svc.cluster.local", "svc.cluster.local", "cluster.local" and "local" domains and forwarded to the internal DNS server. This is done by Dnsmasq, as configured from /etc/dnsmasq.d/origin-dns.conf:

server=/cluster.local/172.30.0.1 

This configuration tells Dnsmasq to forward all queries for names in the "cluster.local" domain and sub-domains to the internal DNS server listening on 172.30.0.1.

This is an example of such a query (where 192.168.122.17 is the IP address of the local node on which Dnsmasq binds, and 172.30.254.155 is a service IP address):

nslookup logging-es.logging.svc.cluster.local
Server:		192.168.122.17
Address:	192.168.122.17#53

Name:	logging-es.logging.svc.cluster.local
Address: 172.30.254.155

The non-OpenShift name requests are forwarded to the real DNS server stored in /etc/dnsmasq.d/origin-upstream-dns.conf:

server=192.168.122.12

The configuration file /etc/dnsmasq.d/origin-dns.conf is deployed at installation, while /etc/dnsmasq.d/origin-upstream-dns.conf is created dynamically every time the external DNS changes, as it is the case for example when the interface receives a new DHCP IP address.

For more details, see

Dnsmasq

Container /etc/resolv.conf

The container /etc/resolv.conf is created while the container is being assembled, based on information available in node-config.yaml. Also see:

External OpenShift DNS Resources

Routing Layer

OpenShift networking provides a platform-wide routing layer that directs outside traffic to the correct pod IPs and ports. The routing layer cooperates with the service layer. Routing layer components themselves run in pods. They provide automated load balancing to pods, and routing around unhealthy pods. The routing layer is pluggable and extensible. For more details about the overall OpenShift workflow, see "OpenShift Workflow".

Router

https://docs.openshift.com/container-platform/3.5/install_config/router/index.html
https://docs.openshift.com/container-platform/3.5/architecture/core_concepts/routes.html
https://docs.openshift.com/container-platform/latest/install_config/router/index.html

The router service routes external requests to applications inside the OpenShift environment. The router service is deployed as one or more pods. The pods associated with the router service are deployed as infrastructure containers on infrastructure nodes. A router is usually created during the installation process. Additional routers can be created with oadm router.

The router is the ingress point for all traffic flowing to the backing pods implementing OpenShift services. Routers run in containers, which may be deployed on any node (or nodes) in an OpenShift environment, as part of the "default" project. A router works by resolving the fully qualified external DNS names that external requests are associated with (for example "kibana.apps.openshift.novaordis.io") into pod IP addresses, and by proxying the requests directly to the pods that back the corresponding service. The router gets the pod IPs from the service, and proxies the requests to the pods directly, not through the service.

Routers attach directly to ports 80 and 443 on all interfaces on a host, so deployment of the corresponding containers should be restricted to the infrastructure hosts where those ports are available. Statistics are usually available on port 1936.

Relationship between Service and Router

When the default router is used, the service layer is bypassed. The Service is used only to find which pods the service represents. The default router does the load balancing by proxying directly to the pod endpoints.

OpenShiftRouterConcepts.png

Sticky Sessions

The router configuration determines the sticky session implementation. The default HAProxy template implements sticky sessions using the "balance source" directive.

Router Implementations

HAProxy Default Router

https://docs.openshift.com/container-platform/3.5/architecture/core_concepts/routes.html#haproxy-template-router

By default, the router process is an instance of HAProxy, referred to as the "default router". It uses the "openshift3/OpenShift Container Platform-haproxy-router" image, and it supports unsecured, edge-terminated, re-encryption-terminated and passthrough-terminated routes matching on HTTP host and request path.

F5 Router

The F5 Router integrates with existing F5 Big-IP systems.

Router Operations

Information about the existing routers can be obtained with:

 oadm router -o yaml

More details on router operations are available here:

Router Operations

Route

Logically, a route is an external DNS entry, either in a top level domain or a dynamically allocated name, that is created to point to an OpenShift service so that the service can be accessed outside the cluster. The administrator may configure one or more routers to handle those routes, typically through an Apache or HAProxy load balancer/proxy.

The route object describes a service to expose by giving it an externally reachable DNS host name.

A route is a mapping of an FQDN and path to the endpoints of a service. Each route consists of a route name, a service selector and an optional security configuration:

NAME             HOST/PORT                            PATH      SERVICES         PORT      TERMINATION          WILDCARD
logging-kibana   kibana.apps.openshift.novaordis.io             logging-kibana   <all>     reencrypt/Redirect   None

A route can be unsecured or secured.

The secure routes specify the TLS termination of the route, which relies on SNI (Server Name Indication). More specifically, they can be:

  • edge secured (or edge terminated). Termination occurs at the router, prior to proxying the traffic to the destination. The front end of the router serves the TLS certificates, so they must be configured in the route. If the certificates are not configured in the route, the default certificate of the router is used. Connections from the router into the internal network are not encrypted. This is an example of an edge-terminated route.
  • passthrough secured (or passthrough terminated). The encrypted traffic is sent to the destination; the router does not provide TLS termination, hence no keys or certificates are required on the router. The destination handles the certificates. This method supports client certificates. This is an example of a passthrough-terminated route.
  • re-encryption terminated secured. This is an example of a re-encryption-terminated route.

Route Definition

Routes can be displayed with the following commands:

oc get all
oc get routes

Routes can be created with API calls, a JSON or YAML object definition file, or the oc expose service command.
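
As a minimal sketch, this is the shape of an edge-terminated route definition; the route name, host and service name are hypothetical:

kind: Route
metadata:
  name: some-route
spec:
  # externally reachable host name served by the router
  host: some-app.apps.openshift.novaordis.io
  # the service whose endpoints back this route
  to:
    kind: Service
    name: some-service
  tls:
    # TLS is terminated at the router; see the route types above
    termination: edge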

Path-Based Route

A path-based route specifies a path component, allowing the same host to serve multiple routes.

Route with Hostname

Routes let you associate a service with an externally reachable hostname. If the hostname is not provided, OpenShift will generate one based on the <route-name>-<namespace>.<suffix> pattern.

Default Routing Subdomain

The suffix and the default routing subdomain can be configured in the master configuration file.

Route Operations

Route Operations

Networking Workflow

The networking workflow is implemented at the networking layer.

  • A user requests a page by pointing the browser to http://myapp.mydomain.com
  • The external DNS server resolves that request to the IP address of one of the hosts that host the default router. That usually requires a wildcard CNAME record in the DNS server pointing to the node that hosts the router container.
  • The default router selects a pod from the list of pods listed by the service and acts as a proxy for that pod (this is where the port can be translated from the external 443 to the internal 8443, for example).

Resources

  • cpu, requests.cpu
  • memory, requests.memory
  • limits.cpu
  • limits.memory
  • pods
  • replicationcontrollers
  • resourcequotas
  • services
  • Secrets
  • configmaps
  • persistentvolumeclaims
  • openshift.io/imagestreams

Most resources can be defined in JSON or YAML files, or via an API call. Resources can be exposed via the downward API to the container.
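
For orientation, a minimal sketch of a ResourceQuota limiting some of the resources listed above; the name and values are hypothetical:

kind: ResourceQuota
metadata:
  name: project-quota
spec:
  hard:
    # maximum number of pods in the project
    pods: "10"
    requests.cpu: "2"
    requests.memory: 4Gi
    limits.cpu: "4"
    limits.memory: 8Gi
    persistentvolumeclaims: "5"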

Resource Management

Resource Management

Project

Projects allow groups of users to work together, define ownership of resources and manage resources - they can be seen as the central vehicle for managing regular users' access to resources. A project restricts and tracks the use of resources with quotas and limits. A project is a Kubernetes namespace with additional annotations. A project can contain any number of containers of any kind, and any grouping or structure can be enforced using labels. The project is the closest concept to an application; OpenShift does not know of applications, though it provides a new-app command. A project lets a community of users organize and manage their content in isolation from other communities. Users must be given access to projects by administrators, unless they are given permission to create projects, in which case they automatically have access to their own projects.

Most objects in the system are scoped by a namespace, with the exception of:

More details about OpenShift types are available here OpenShift types.

Each project has its own set of objects: pods, services, replication controllers, etc. The names of each resource are unique within a project. Developers may request projects to be created, but administrators control the resources allocated to the projects.

What actions a user can or cannot perform on a project's objects are specified by policies.

Constraints are quotas for each kind of object that can be limited.

New projects are created with:

oc new-project

Current Project

The current project is a concept that applies to oc, and specifies the project oc commands apply to, without having to explicitly use the -n <project-name> qualifier. The current project can be set with oc project and read with oc status. The current project is part of the CLI tool's current context, maintained in the user's .kube/config.

Global Project

In the context of an ovs-multitenant SDN plugin, a project is global if it can receive cluster network traffic from any pods, belonging to any project, and it can send traffic to any pods and services. Is the default project a global project? A project can be made global with:

oadm pod-network make-projects-global <project-1-name> <project-2-name>

Standard Projects

Default Project

The "default" project is also referred to as "default namespace". It contains the following pods:

  • the integrated docker registry pod (memory consumption based on a test installation: 280 MB)
  • the registry console pod (memory consumption based on a test installation: 34 MB)
  • the router pod (memory consumption based on a test installation: 140 MB)

In case the ovs-multitenant SDN plug-in is installed, the "default" project has VNID 0 and all its pods can send and receive traffic from any other pods.

The "default" project can be used to store a new project template, if the default one needs to be modified. See Template Operations - Modify the Template for New Projects.

"logging" Project

If logging support is deployed at installation or later, the participating pods (kibana, ElasticSearch, fluentd, curator) are members of the "logging" project.

This is the memory consumption based on a test installation:

  • kibana pod: 95 MB
  • elasticsearch pod: 1.4GB
  • curator pod: 10 MB
  • fluentd pods max 130 MB

"openshift-infra" Project

Contains the metrics components:

"openshift" Project

Contains standard templates and image streams. Images to use with OpenShift projects can be installed during the OpenShift installation phase, or they can be added later by running a command similar to:

oc -n openshift import-image jboss-eap64-openshift:1.6

Other Standard Projects

  • "kube-system"
  • "management-infra"

Projects and Applications

Each project may have multiple applications, and each application can be managed individually.

A new application is created with:

oc new-app

The "app" Label

TODO: app vs. application

There is no OpenShift object that represents an application. However, the pods belonging to a specific application are grouped together in the web console project window, under the same "Application" logical category. The grouping is done based on the "app" label value: all pods with the same "app" label value are represented as belonging to the same "application". The "app" label value can be explicitly set in the template that is used to instantiate the application, in the "labels:" section. Some templates expose this as a parameter, so it can be set on the command line with a syntax similar to:

--param APPLICATION_NAME=<application-name>
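
As a sketch, assuming a template that exposes an APPLICATION_NAME parameter, the template's top-level "labels:" section can apply its value as the "app" label to every object the template instantiates:

kind: Template
metadata:
  name: some-template
# applied to every object created from this template
labels:
  app: ${APPLICATION_NAME}
...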

Application Operations

Application Operations

Project Operations

Project Operations

Template

https://docs.openshift.com/container-platform/latest/dev_guide/templates.html
https://docs.openshift.org/latest/dev_guide/templates.html

A template is a resource that describes a set of objects that can be parameterized and processed, so they can be created at once. The template can be processed to create anything within a project, provided that permissions exist. The template may define a set of labels to apply to every object defined in the template. A template can use preset variables or randomized values, like passwords.

Templates can be stored in, and processed from files, or they can be exposed via the OpenShift API and stored as part of the project. Users can define their own templates within their projects.

The objects defined in a template collectively define a desired state. OpenShift's responsibility is to make sure that the current state matches the desired state.

Template Definition

Specifying parameters in a template allows all objects instantiated by the template to see consistent values for these parameters when the template is processed. The parameters can be specified explicitly, or generated automatically.
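
For orientation, a minimal sketch of a template declaring both kinds of parameters; the parameter names and the generation expression are hypothetical:

kind: Template
metadata:
  name: some-template
parameters:
# explicitly specified parameter
- name: APPLICATION_NAME
  value: some-app
# automatically generated parameter (a random 12-character value, e.g. a password)
- name: DATABASE_PASSWORD
  generate: expression
  from: "[a-zA-Z0-9]{12}"
objects:
- ...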

The configuration can be generated from a template with the oc process command.

Most templates use pre-built S2I builder images, which include the programming language runtime and its dependencies. These builder images can also be used by themselves, without the corresponding template, for simple use cases.

New Project Template

The master provisions projects based on the template that is identified by the "projectRequestTemplate" in master-config.yaml file. If nothing is specified there, new projects will be created based on a built-in new project template that can be obtained with:

oadm create-bootstrap-project-template -o yaml

New Application Template

oc new-app <template-name>

Template Libraries

Template Operations

Template Operations

Build

https://docs.openshift.com/container-platform/latest/architecture/core_concepts/builds_and_image_streams.html

A build is the process of transforming input parameters into a resulting object. In most cases, that means transforming source code, other images, Dockerfiles, etc. into a runnable image. A build runs inside a privileged container and has the same restrictions normal pods have. A build usually results in an image pushed to a Docker registry, subject to post-build tests. Builds run with the builder service account, which must have access to any secrets they need, such as repository credentials; in addition, the build configuration must contain the required build secret. Builds can be triggered manually or automatically.

OpenShift supports several types of builds. The type of build is also referred to as the build strategy:

Builds for a project can be reviewed by navigating with the web console to the project -> Builds -> Builds, or by invoking oc get builds from the command line. Note these are actual executed builds, not build configurations.

Build Strategy

Docker Build

Docker can be used to build images directly. The Docker build expects a repository with a Dockerfile and all artifacts required to produce a runnable image. It invokes the docker build command, and the result is a runnable image.

Create a Docker Build

Source-to-Image (S2I) Build

https://docs.openshift.com/container-platform/latest/architecture/core_concepts/builds_and_image_streams.html#source-build
https://docs.openshift.com/container-platform/latest/creating_images/s2i.html#creating-images-s2i

The source-to-image build strategy is a process that takes a base image, called the builder image, application source code and S2I scripts, compiles the source and injects the build artifacts into the builder image, producing a new, ready-to-run Docker image, which is the end product of the build. The image is then pushed into the integrated Docker registry.

The essential characteristic of the source build is that the builder image provides both the build environment and the runtime image the build artifact is supposed to run in. The build logic is encapsulated within the S2I scripts. The S2I scripts usually come with the builder image, but they can also be overridden by scripts placed in the source code or in a different location, specified in the build configuration. OpenShift supports a wide variety of languages and base images: Java, PHP, Ruby, Python, Node.js. Incremental builds are supported. With a source build, the assemble process performs a large number of complex operations without creating a new layer at each step, resulting in a compact final image.

The source strategy is specified in the build configuration as follows:

kind: BuildConfig
spec: 
  strategy:
     type: Source
     sourceStrategy:
       from:
          kind: ImageStreamTag
          name: jboss-eap70-openshift:1.5
          namespace: openshift
  source:
    type: Git
    git:
      uri: ...
  output:
    to:
      kind: ImageStreamTag
      name: <app-image-repository-name>:latest

The builder image is specified with the 'spec.strategy.sourceStrategy.from' element. The source code origin is specified with the 'spec.source' element. Assuming that the build is successful, the resulting image is pushed into the ? image stream and tagged with the 'output' image stream tag.

A working S2I Build Example is available here:

OpenShift Build and Deploy a JEE Application with S2I

An example to create a source build configuration from scratch is available here:

Create a Source Build

The Build Process

For a source build, the build process takes place in the builder image container. It consists of the following steps:

  • Download the S2I scripts.
  • If it is an incremental build, save the previous build's artifacts with save-artifacts.
  • A work directory is created.
  • Pull the source code from the repository into the work directory.
  • Heuristics are applied to detect how to build the code.
  • Create a TAR that contains S2I scripts and source code.
  • Untar S2I scripts, sources and artifacts.
  • Invoke the assemble script.
  • The build process changes to 'contextDir' - anything that resides outside 'contextDir' is ignored by the build.
  • Run build.
  • Push image to docker-registry.default.svc:5000/<project-name>/...

Builder Image

The builder image is an image that must contain both compile-time dependencies and build tools, because the build process takes place inside it, and runtime dependencies and the application runtime, because the image will be used to create containers that execute the application. The builder image must also contain the tar archiving utility, available in $PATH, which is used during the build process to extract source code and S2I scripts, and the /bin/sh command line interpreter. A builder image should be able to generate some usage information when run with docker run. An article that shows how to create builder images is available here: https://blog.openshift.com/create-s2i-builder-image/.

Extended Build

An extended build uses a builder image and a runtime image as two separate images. In this case the builder image contains the build tooling but not the application runtime. The runtime image contains the application runtime. This is useful when we don't want the build tooling lying around in the runtime image.

S2I Scripts

The S2I scripts encapsulate the build logic and must be executable inside the builder image. They must be provided as an input of the source build and play an essential role in the build process. They come from one of the following locations, listed below in inverse order of precedence (if the same script is available in more than one location, the script from the location listed last is used):

  • Bundled with the builder image, in a location specified as "io.openshift.s2i.scripts-url" label. A common value is "image:///usr/local/s2i". As an example, these are the S2I scripts that come with an EAP7 builder image: assemble, run and save-artifacts.
  • Bundled with the source code in an .s2i/bin directory of the application source repository.
  • Published at a URL that is specified in the build configuration definition.

Both the "io.openshift.s2i.scripts-url" label value specified in the image and the script specified in the build configuration can take one of the following forms:

  • image:///<path-to-script-directory> - the absolute path inside the image.
  • file:///<path-to-script-directory> - relative or absolute path to a directory on the host where the scripts are located.
  • http(s)://<path-to-script-directory> - URL to a directory where the scripts are located.

The scripts are:

assemble

This is a required script. It builds the application artifacts from source and places them into appropriate directories inside the image. The script's main responsibility is to turn source code into a runnable application. It can also be used to inject configuration into the system.

It should execute the following sequence:

  • Restore build artifacts, if the build is incremental. In this case save-artifacts must be also defined.
  • Place the application source in the build location.
  • Build.
  • Install the artifacts into locations that are appropriate for them to run.

EAP7 builder image example: assemble.

run

This is a required script. It is invoked when the container is instantiated to execute the application.

EAP7 builder image example: run.

save-artifacts

This is an optional script, needed when incremental builds are enabled. It gathers all dependencies that can speed up the build process, from the build image that has just completed the successful build (the Maven .m2 directory, for example). The dependencies are assembled into a tar file and streamed to standard output.

EAP7 builder image example: save-artifacts.

usage

This is an optional script. Informs the user how to properly use the image.

test/run

This is an optional script. Allows to create a simple process to check whether the image is working correctly. For more details see:

https://docs.openshift.com/container-platform/latest/creating_images/s2i_testing.html#creating-images-s2i-testing

Incremental Build

The source build may be configured as an incremental build, which re-uses previously downloaded dependencies and previously built artifacts in order to speed up the build.

strategy:
  type: "Source"
  sourceStrategy:
    from:
       ...
    incremental: true

Webhooks

A source build can be configured to be automatically triggered when a new event - most commonly a push - is detected by the source repository. When the repository identifies a push, or any other kind of event it was configured for, it makes an HTTP invocation into an OpenShift URL. For internal repositories that run within the OpenShift cluster, such as Gogs, the URL is https://openshift.default.svc.cluster.local/oapi/v1/namespaces/<project-name>/buildconfigs/<build-configuration-name>/webhooks/<generic-secret-value>/generic. The secret value can be manually configured in the build configuration, as shown here, but it is usually set to a randomly generated value when the build configuration is created. For an external repository, such as GitHub, the URL must be publicly accessible TODO. Once the build is triggered, it proceeds as it would otherwise proceed if it was triggered by other means. Note that no build server is required for this mechanism to work, though a build server such as Jenkins can be integrated and configured to drive a pipeline build.

The detailed procedure to configure a webhook trigger is available here:

Configure a Webhook Trigger for a Source Build

Tagging the Build Artifact

Once the build process (the assemble script) completes successfully, the build runtime sets the image's command to the run script, and tags the image with the output name specified in the build configuration.

TODO. How to tag the output image with a dynamic tag that is generated from the information in the source code itself?

MAVEN_MIRROR_URL

TODO 'MAVEN_MIRROR_URL' is an environment variable interpreted by the S2I builder, which uses the Maven repository whose URL is specified as a source of artifacts. For more details see:

OpenShift Nexus

Pipeline Build

https://docs.openshift.com/container-platform/latest/architecture/core_concepts/builds_and_image_streams.html#pipeline-build
https://docs.openshift.com/container-platform/latest/install_config/configuring_pipeline_execution.html#install-config-configuring-pipeline-execution
https://docs.openshift.com/container-platform/latest/dev_guide/dev_tutorials/openshift_pipeline.html

The pipeline build strategy allows developers to define the build logic externally, as a Jenkins Pipeline. The logic is executed by an OpenShift-integrated Jenkins instance, which uses a series of specialized plug-ins to work with OpenShift. More details about how OpenShift and Jenkins interact are available in the OpenShift CI/CD Concepts page. The specification of the build logic can be embedded directly into an OpenShift build configuration object, as shown below, or it can be specified externally in a Jenkinsfile, which is then automatically integrated with OpenShift. The build can then be started, monitored, and managed by OpenShift in the same way as any other build type, but it can at the same time be managed from the Jenkins UI - these two representations are kept in sync. The pipeline's graphical representation is available both in the integrated Jenkins instance and in OpenShift directly, as a "Pipeline":

OpenShiftPipeline.png

The pipeline build strategy is specified in the build configuration as follows:

kind: BuildConfig
spec: 
  strategy:
     type: JenkinsPipeline
     jenkinsPipelineStrategy:
       jenkinsfile: |-
            node('mvn') {

                //
                // Groovy script that defines the pipeline 
                //
            }
  source:
    type: none
  output: {}

This is an example of specifying the build logic inline in the build configuration. It is also possible to specify the build logic externally, in a Jenkinsfile. Note that unlike in the source build's case, no "output" is specified, because this is fully defined in the Jenkins Groovy script. "source" may be used to specify a source repository URL that contains the Jenkinsfile pipeline definition.

An example of how to create a pipeline build configuration from scratch is available here:

Create a Pipeline Build

CI/CD

OpenShift enables DevOps. It has built-in support for the Jenkins CI server, providing a native way of doing Continuous Integration and Continuous Delivery.

OpenShift CI/CD Concepts
OpenShift CI/CD Operations

Custom Build

The custom build strategy allows defining a specific builder image responsible for the entire build process, which makes it possible to customize the build process.

The custom builder image is a plain Docker image embedded with build process logic.

Chained Builds

Two builds may be chained: one produces binaries from a builder image and source, and pushes the artifacts into an image stream; the other pulls the binaries produced by the first build from the image stream and places them into a runtime image, based on a Dockerfile. A separate image is thus created.

Build Configuration

A BuildConfig object is the definition of an entire build. It contains a description of how to combine source code and a base image to produce a new image. The BuildConfig lists the location of the source code, the build triggers, the build strategy, various other build configuration parameters, such as the maximum allowed duration, and the specification of the output of the build, which is in most cases a container image that gets pushed into the integrated registry. Both oc new-app and oc new-build create BuildConfig objects. The preferred way to create a build configuration is to start with oc new-build and customize the resulting build configuration:

Create a Build Configuration with oc new-build
Build Configuration Definition

Generally, immediately after the build configuration is declared, a build starts, and if it is successful, a new image is pushed into the artifact image stream.

Build Trigger

Builds can be triggered by the following events:

TODO: when does a build start automatically and when doesn't.

github, generic, gitlab, bitbucket

Source code change, signaled by a GitHub webhook or a generic webhook. An internal repository must be configured to invoke a URL similar to https://openshift.default.svc.cluster.local/oapi/v1/namespaces/$CICD_PROJECT/buildconfigs/<application-name>/webhooks/${WEBHOOK_SECRET}/generic. An external repository must be configured to invoke either into Jenkins using a URL similar to ? or into the associated build configuration, using a URL similar to ?.
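
As a sketch of the corresponding build configuration fragment, the webhook triggers and their secrets are declared in the "triggers:" section; the secret placeholders are hypothetical:

kind: BuildConfig
spec:
  ...
  triggers:
  # generic webhook: invoked via .../webhooks/<secret>/generic
  - type: Generic
    generic:
      secret: <generic-secret-value>
  # GitHub webhook
  - type: GitHub
    github:
      secret: <github-secret-value>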

ImageChange

https://docs.openshift.com/container-platform/3.7/dev_guide/deployments/basic_deployment_operations.html#image-change-trigger

Base image change, in case of source-to-image builds.

Build Configuration Change

https://docs.openshift.com/container-platform/3.7/dev_guide/deployments/basic_deployment_operations.html#config-change-trigger

Manual Build

Manual with oc start-build.

Build Source

A build configuration accepts the following types of sources:

Build Secrets

The source element may refer to build secrets:

kind: BuildConfig
...
spec:
  ...
  source:
    ...
    sourceSecret:
      name: some-secret

Build Node Selection

Build can be assigned to specific nodes, by specifying labels in the "nodeSelector" field of the build configuration:

kind: BuildConfig
...
spec:
  nodeSelector:
    env: app

Build Resources

Can be set up in the build configuration:

kind: BuildConfig
spec:
  resources:
    limits:
      cpu: ...
      memory: ...

Build Operations

Build Operations

Image

https://docs.openshift.com/container-platform/latest/architecture/core_concepts/containers_and_images.html#docker-images

An OpenShift image represents an immutable container image in Docker format. Images are not project-scoped, but cluster-scoped: any user from any project can get information about any image in the cluster with oc get images, provided that they have sufficient cluster-level privileges. One role that permits inspecting image information is cluster-reader. On the other hand, a user that only has project-level privileges cannot inspect images.

Information about a specific image can be obtained with oc get image <image-name>

OpenShift Image Definition

Image Name

An image is identified by a name, which is the SHA256 hash of the image: "sha256:ea573da7c263e68f2d021c63bec218b79699a0b48e58b3724360de9c6900ca46". The name can be local to the current cluster - the images produced as artifacts of the cluster's project do not exist anywhere else, until they are explicitly published - or point to a remote Docker registry. An OpenShift cluster usually comes with its own integrated Docker registry, accessible to all projects, whose main function is to store images produced by the projects as result of build activities.

Image Reference

The image contains a "dockerImageReference" entry that maintains the location the image can be found at:

docker-registry.default.svc:5000/novaordis-dev/tasks@sha256:41976593d219eb2008a533e7f6fbb17e1fc3391065e2d592cc0b05defe5d5562
registry.access.redhat.com/redhat-openjdk-18/openjdk18-openshift@sha256:ea573da7c263e68f2d021c63bec218b79699a0b48e58b3724360de9c6900ca46

Image Stream

An image stream is similar to a Docker image repository, in that it contains a group of related Docker images identified by image stream tags.

The difference in referencing an image stream (or a tag within an image stream) instead of referencing an image registry/namespace/repository:tag directly is that actions can be triggered automatically when the underlying image changes. These actions may include functionality to control when image updates are rolled out.

Logically, an image stream is analogous to a branch in a source code repository. The image stream presents a single virtual view of related images and allows you to control which images are rolled out to your builds and applications. The stream may contain images from the OpenShift integrated Docker registry, other image streams, or other image repositories. OpenShift stores complete metadata about each image, including the example command, entry point and environment variables.

Image Stream Definition

Image streams exist as OpenShift objects, describable with oc get is, but they also have a representation in the cluster's integrated docker repository docker-registry.default.svc:5000/, as project-name/image-stream-name. As an example, for a "blue-project" and a "red-is":

kind: ImageStream
metadata:
  ...
  name: red-is
  namespace: blue-project
spec:
  tags:
  - name: latest
    from: 
      kind: DockerImage
      name: registry.access.redhat.com/redhat-openjdk-18/openjdk18-openshift

ImageStreamInRegistry.png

Example of a JSON file that can be used to create image streams in OpenShift:

https://github.com/openshift/origin/blob/master/examples/image-streams/image-streams-rhel7.json

Builds and deployments can watch an image stream to receive notifications when new images are added and react by performing a build or a deployment.

Operations:

ImageStream Operations

"openshift" Image Streams

A set of standard image streams come pre-configured in the "openshift" project of an OpenShift installation, but image streams can be created in any project.

Image Stream Lookup Policy

The lookup policy specifies how other resources reference this image within this namespace.

Possible values:

  • local - changes a Docker short image reference, such as "mysql" or "php:latest", on objects in this namespace to the image ID whenever it matches this image stream, instead of reaching out to a remote registry. The name is fully qualified to an image ID if found. The tag's referencePolicy is taken into account on the replaced value. It only works within the current namespace. See the sketch below.
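
A minimal sketch of how the lookup policy appears in an image stream definition; the stream name is illustrative:

kind: ImageStream
metadata:
  name: mysql
spec:
  lookupPolicy:
    local: true
  tags:
  ...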

Image Stream Tag

The default tag is called "latest". A tag may point to an external Docker registry, to other tags in the same image stream or in a different image stream, or it may be controlled to point directly to known images. An image stream tag's full name is <image-stream-name>:<tag-name>. For example, the tag "0.11.29" exposed by the "gogs" image stream as shown here:

...
kind: ImageStream
metadata:
  name: gogs
spec:
  tags:
  - name: "0.11.29"
    ...

is referred to by a deployment configuration as:

...
triggers:
- type: ImageChange
  imageChangeParams:
    ...
    from:
      kind: ImageStreamTag
      name: gogs:0.11.29

Images can be pushed to an image stream tag directly via the integrated Docker registry.

The image stream tag has a referencePolicy, which defines how other components should consume this image. The reference policy's type determines how the image pull spec should be transformed when the image stream tag is used in deployment configuration triggers or new builds. The default value is "Source", indicating that the original location of the image should be used (if imported). The user may also specify "Local", indicating that the pull spec should point to the integrated Docker registry and leverage the registry's ability to proxy the pull to an upstream registry. "Local" allows the credentials used to pull this image to be managed from the image stream's namespace, so others on the platform can access a remote image but have no access to the remote secret. It also allows the image layers to be mirrored into the local registry, so that the images can still be pulled even if the upstream registry is unavailable.
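
A minimal sketch of a tag that uses the "Local" reference policy, reusing the "red-is" example from above:

kind: ImageStream
metadata:
  name: red-is
spec:
  tags:
  - name: latest
    from:
      kind: DockerImage
      name: registry.access.redhat.com/redhat-openjdk-18/openjdk18-openshift
    referencePolicy:
      type: Local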

Image Pull Policy

When the container is created, the runtime uses the "imagePullPolicy" to determine whether to pull the image prior to starting the container. More details available here:

Pod Definition - imagePullPolicy

To Process

Deployment

https://docs.openshift.com/container-platform/latest/architecture/core_concepts/deployments.html
https://docs.openshift.com/container-platform/latest/dev_guide/deployments/how_deployments_work.html

A deployment is a replication controller based on a user-defined template called a deployment configuration: a successful deployment results in a new replication controller being created.

A deployment adds extended support for software development and deployment lifecycle. Deployments are created manually or in response to triggered events, and the most common events that trigger a deployment are either an image change or a configuration change.

The deployment system provides: a deployment configuration, which is a template for deployments; triggers that drive automated deployments in response to events; user-customizable strategies to transition from the previous deployment to the new deployment; a rollback procedure; and manual replication scaling. The deployment configuration version is incremented each time a new deployment is created from the configuration. Deployments allow defining hooks to be run before and after the replication controller is created.

Deployments allow rollbacks.

Deployments allow manual replication scaling or autoscaling.

Deployments can be triggered manually with oc deploy.

Deployment Configuration

A deployment configuration is a user-defined template for performing deployments, which result in running applications. The deployment configuration defines the template for a pod and manages deploying new images or configuration changes whenever those change. A single deployment configuration is usually analogous to a single microservice.

Each deployment is represented as a replication controller. The OpenShift environment creates a replication controller to run the application in response to a deployment configuration. The deployment configuration contains a version number that is incremented each time a replication controller is created from the configuration.

Deployment configurations can support many different deployment patterns, including full restart, customizable rolling updates and fully custom behaviors, as well as pre- and post-deployment hooks. They support automatic rollback to the last successful revision of the configuration, in case the current template fails to deploy.

The DeploymentConfig contains the following elements, which are tied together in the sketch after the list:

  • Replication controller definition.
  • Default replica count for the deployment.
  • Triggers for creating new deployments automatically, in response to events. If no triggers are defined, deployments must be started manually.
  • Strategy for transitioning between deployments.
  • Life-cycle hooks. Every hook has a failure policy (Abort, Retry, Ignore).
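
A minimal sketch of a deployment configuration; the names, labels and image reference are illustrative:

kind: DeploymentConfig
metadata:
  name: some-app
spec:
  replicas: 2
  selector:
    app: some-app
  strategy:
    type: Rolling
  triggers:
  - type: ConfigChange
  template:
    metadata:
      labels:
        app: some-app
    spec:
      containers:
      - name: some-app
        image: docker-registry.default.svc:5000/blue-project/some-app:latest
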
DeploymentConfig Definition

The DeploymentConfig for a project can be listed with:

oc get all
oc get dc

Deployment Triggers

The deployment triggers are specified in the deployment configuration and can be modified from the command line:

ConfigChange

Results in a new deployment, and a new replication controller, being created whenever changes are detected in the replication controller template of the deployment configuration. In the presence of a ConfigChange trigger, the first replication controller is created automatically when the deployment configuration itself is created.

ImageChange

The deployment trigger is a change in the image stream. If we do not want the result of a build to be deployed automatically, even if the build pushes a new image into the registry, we simply do not list the "ImageChange" deployment trigger in the deployment configuration. Both trigger types are sketched below.
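
A minimal sketch of a deployment configuration declaring both trigger types; the container and image stream tag names are illustrative:

kind: DeploymentConfig
spec:
  ...
  triggers:
  - type: ConfigChange
  - type: ImageChange
    imageChangeParams:
      automatic: true
      containerNames:
      - some-app
      from:
        kind: ImageStreamTag
        name: some-app:latest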

Replication Controller

A replication controller is one of the pod controller types available in OpenShift. It resides on the master and ensures that the number of pod replicas specified in the replication controller configuration is running at all times.

Logically, the replication controllers constitute the replication layer.

The definition of a replication controller includes the number of replicas to be maintained, the pod definition for creating the replicated pod, and a selector for identifying managed pods. If pods exit or are deleted, either explicitly or because the node they run on is taken out of service, the replication controller instantiates more pods, up to the desired number. If there are more pods running than desired, the replication controller deletes as many as necessary. However, it is NOT the replication controller's job to perform autoscaling based on load or traffic. Replication controllers do not exist as physical processes, meaning they do not run in pods; they only exist as entries in etcd, and the master executes the logic.

A replication controller is most commonly used to represent a single deployment of part of an application based on a built image.

There are situations in which a failed deployment can be restarted by deleting the replication controller.

The replication controllers of a project can be listed with:

oc get all
oc get rc

Rollout

A rollout is exposed as a replication controller, and the deployment process manages scaling down old replication controllers and scaling up new ones. It implements one of the deployment strategies. The rollout is performed with:

oc rollout
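
A usage sketch; <name> is a placeholder for the deployment configuration name:

oc rollout latest dc/<name>
oc rollout status dc/<name>
oc rollout history dc/<name>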

Deployment Configurations and Replication Controllers

A deployment configuration triggers creation of a replication controller.

Deleting a deployment configuration will automatically delete the replication controllers generated by that deployment configuration.

Rollback

Deployments allow rollbacks to previous versions of an application: when one deployment is superseded by another, the previous replication controller is retained, with its number of replicas set to 0. When triggered - for example, because the current template fails to deploy - a rollback reverts the application to the last successful deployment. It is done with oc rollback, the API or the web console.

Deployment Strategy

The deployment strategy, defined in the deployment configuration, determines the deployment process. The strategy uses readiness checks to determine if a new pod is ready for use. If a readiness check fails, the deployment configuration retries until it times out. The readiness timeout value is set in the deployment configuration.

The deployment strategy is implemented during the rollout process.

Rolling Deployment Strategy

This is the default deployment strategy, used when no strategy is explicitly specified in the deployment configuration. It performs rolling updates and supports life-cycle hooks for injecting code into the deployment process.

It consists of the following steps:

  • Execute the "pre" life-cycle hook.
  • Scale up the new deployment by one or more pods, based on the maxSurge value, waiting until all readiness checks complete.
  • Scale down the old deployment by one or more pods, based on the maxUnavailable value.
  • Repeat scaling until the new deployment reaches the desired replica count and the old deployment has scaled to zero.
  • Execute any "post" life-cycle hook.

When scaling down, the strategy waits for pods to become ready, so it can decide whether further scaling would affect availability. If the scaled-up pods never become ready, the deployment times out and results in a deployment failure. The strategy's parameters are sketched below.
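
A minimal sketch of a rolling strategy and its most common parameters; the values shown are illustrative:

...
kind: DeploymentConfig
spec:
  strategy:
    type: Rolling
    rollingParams:
      maxSurge: 25%
      maxUnavailable: 25%
      intervalSeconds: 1
      updatePeriodSeconds: 1
      timeoutSeconds: 600
      pre:
      post: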

Recreate Deployment Strategy

The recreate strategy is appropriate when the application does not support old and new versions running together, or when the application uses ReadWriteOnce volumes that do not support sharing between multiple replicas.

The recreate strategy implies downtime: there is a time interval when no application instance is running.

It consists of the following steps:

  • Execute "pre" life-cycle hook.
  • Scale down previous deployment to zero.
  • Scale up new deployment.
  • Execute "post" life-cycle hook.

During scale up, if the replica count of the deployment is greater than one, the first deployment replica is validated for readiness before fully scaling up the deployment. If this validation fails, the deployment fails.

...
kind: DeploymentConfig
spec:
  strategy:
    activeDeadlineSeconds: 21600
    type: Recreate
    recreateParams:
      pre:
      mid:
      post:
      timeoutSeconds:     

Parameters:

  • activeDeadlineSeconds: the duration in seconds that the deployer pods for this deployment may be active on a node before the system actively tries to terminate them.

Custom Deployment Strategy

Allows for custom commands.

The optional "command" array overrides the CMD directive specified in the Dockerfile.

The optional "environment" variables are added to the strategy process' execution.

Deployment Operations

Deployment Operations

Environment Variables to Use for Strategy Process

  • OPENSHIFT_DEPLOYMENT_NAME
  • OPENSHIFT_DEPLOYMENT_NAMESPACE

To Process

Region

Zone

High Availability (HA)

Infrastructure HA

See Master HA.

Application HA

OpenShift ensures high availability by deploying the same image in multiple containers across multiple hosts and load balancing among them. This technique also provides horizontal scalability for a service packaged into an image.

Installation

There are two installation procedures: RPM and Containerized.

An RPM installation installs all services through package management and configures services to run within the same user space.

A containerized installation installs services using container images and runs separate services in individual containers.

For practical details on installing various OpenShift versions, see:

OpenShift Installation

Related:

OpenShift Logging Installation

Installation is performed by Ansible, usually deployed on the environment's support server. Ansible configuration is available under /etc/ansible and the installation logic under /usr/share/ansible.

Metrics

OpenShift Metrics Concepts

Horizontal Pod Autoscaler (HPA)

https://docs.openshift.com/container-platform/latest/dev_guide/pod_autoscaling.html#dev-guide-pod-autoscaling

Logging

OpenShift Logging Concepts

Events

https://docs.openshift.com/container-platform/3.5/dev_guide/events.html

OpenShift events encapsulate information about specific conditions detected in the OpenShift cluster, and allow the OpenShift management facilities to record information about those occurrences in a resource-agnostic manner. They also allow administrators and developers to consume information about system components in a unified way. A list of events can be obtained with:

oc get events [-n <project-name>]

Events for a project can be reviewed in the web console by navigating to the project, then to Monitoring; the events are displayed on the right side of the page.

Events contain:

  • type: Normal, Warning
  • kind: Configuration, Node, Pod, DaemonSet, Container, Health, Image, Image Manager, System.
  • reason
  • source
  • message

Event Reasons

  • FailedScheduling
  • OutOfDisk
  • MatchNodeSelector
  • SuccessfulCreate

Command Line Tools

OpenShift Command Line Tools (CLI)

Admission Control

https://docs.openshift.com/container-platform/latest/architecture/additional_concepts/admission_controllers.html

Admission control plug-ins intercept requests to the master API after authentication and authorization have been enforced. There is a chain of plug-ins (also known as the admission chain), and if any plug-in rejects the request, the request fails. A plug-in may modify the request object and related resources. The default list of admission control plug-ins is configured in the admissionConfig/pluginConfig section of master-config.yaml, as sketched below.
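
A minimal sketch of the shape of that section in master-config.yaml; the plug-in name and its configuration payload are illustrative placeholders:

admissionConfig:
  pluginConfig:
    SomeAdmissionPlugin:
      configuration:
        apiVersion: v1
        kind: SomeAdmissionPluginConfig
        ...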

Customizable admission control plug-ins:

OpenShift and JBoss

OpenShift and JBoss

Jobs

https://docs.openshift.com/container-platform/latest/dev_guide/jobs.html

apiVersion: batch/v1
kind: Job
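
A minimal sketch of a job; the job name, image and command are illustrative:

apiVersion: batch/v1
kind: Job
metadata:
  name: some-job
spec:
  parallelism: 1
  completions: 1
  template:
    metadata:
      name: some-job
    spec:
      containers:
      - name: some-job
        image: busybox
        command: ["echo", "hello"]
      restartPolicy: OnFailure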

Cron Jobs

https://docs.openshift.com/container-platform/latest/dev_guide/cron_jobs.html

apiVersion: batch/v1beta1
kind: CronJob
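
A minimal sketch of a cron job that runs a job on a schedule; the name, schedule, image and command are illustrative:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: some-cronjob
spec:
  schedule: "*/10 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: some-cronjob
            image: busybox
            command: ["echo", "hello"]
          restartPolicy: OnFailure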

Configuration Data Externalization

Configuration data must never be stored in application source code; it must be externalized on storage.

Environment Variables

Environment Variables

Custom Metadata Annotations

Custom metadata annotations can be specified in the spec.template.metadata.annotations section of the deployment configuration. Those annotations are injected into the pod metadata and are accessible from the pod with the Downward API, as sketched below.
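
A minimal sketch that exposes the pod annotations as a file via a Downward API volume; the annotation key, container name and mount path are illustrative:

kind: DeploymentConfig
spec:
  template:
    metadata:
      annotations:
        build-info: "some value"
    spec:
      containers:
      - name: some-app
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
          readOnly: true
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: "annotations"
            fieldRef:
              fieldPath: metadata.annotations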

Common Images