OpenShift 3.5 Installation

From NovaOrdis Knowledge Base
* [[OpenShift Installation]]
* [[OpenShift_Concepts#Installation|OpenShift Installation Concepts]]
* [[OpenShift hosts|OpenShift Ansible hosts Inventory File]]


=Overview=


There are two installation methods: ''quick install'', which uses a CLI tool from the "atomic-openshift-utils" package that drives [[Ansible]] in the background, and ''advanced install'', which assumes familiarity with [[Ansible]]. This document covers the advanced install.
 
This document also covers a HA (High Availability) installation.
 
=Diagram=
 
{{Internal|OpenShift 3.5 HA Installation Diagram|OpenShift 3.5 HA Installation Diagram}}
 
=Archive=
 
RackStation:/base/NovaOrdis/Archive/2017.06.29-openshift-3.5-HA
noper430:/root/environments/2017.06.29-openshift-3.5-HA


=Prerequisites=


==Hardware Requirements==

{{External|https://docs.openshift.com/container-platform/3.5/install_config/install/prerequisites.html#hardware}}

{{Internal|noper430 OpenShift 3.5 HA Installation Resources|noper430 OpenShift 3.5 HA Installation Resources}}

A full RHEL 7.3/Master installation requires 121 MB in /boot and 1.7 GB in /.

==External DNS Setup==

An [[OpenShift_Concepts#External_DNS_Server|external DNS server]] is required. Controlling a registrar's (such as http://godaddy.com) DNS zone provides all the capabilities needed to set up external DNS address resolution. A valid, though more complex, alternative is to install a dedicated public DNS server. The DNS server must be available to all OpenShift environment nodes, and also to external clients that need to resolve public names such as the master public web console and API URL, the public wildcard name of the environment, the [[OpenShift_Concepts#Router|application router]] public DNS name, etc. A combination of a public DNS server that resolves public addresses and an internal DNS server, deployed as part of the environment (usually on the support node) and tasked with resolving internal addresses, is also a workable solution. This installation procedure includes the installation of the bind DNS server on the support node - see the [[#bind_DNS_Server_Installation|bind DNS server installation]] section below.

===Wildcard Domain for Application Traffic===

The DNS server that resolves the public addresses must support [[DNS_Concepts#Wildcard_Sub-Domain|wildcard sub-domains]] and resolve the public wildcard DNS entry to the public IP address of the [[OpenShift Concepts#Node|node]] that runs the [[OpenShift_Concepts#Router|default router]], or of the public ingress node that proxies to the default router. If the environment has multiple routers, an external load balancer is required, and the wildcard record must contain the public IP address of the host that runs the load balancer. The name of the wildcard domain will later be specified during the advanced installation procedure in the Ansible inventory file as [[OpenShift_hosts#openshift_master_default_subdomain|openshift_master_default_subdomain]].

Configuration procedures:

* [[Bind_Operations_-_Set_Up_DNS_Server#Declare_a_Wildcard_DNS_Domain|Declare a Wildcard Subdomain for bind]]
* [[godaddy.com#Setting_Up_a_Wildcard_DNS|Setting Up a Wildcard DNS at godaddy.com]]
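For orientation, a wildcard record in a bind zone file might look like the fragment below. The zone name matches the "apps.openshift35.external" subdomain used later in the pre-flight checks; the IP address is a placeholder for the router (or load balancer) host:

```text
; illustrative zone file fragment; 10.0.0.100 is a placeholder for the
; public IP address of the default router or of the external load balancer
*.apps.openshift35.external.    IN    A    10.0.0.100
```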


==O/S Requirements and Configuration==

{{External|https://docs.openshift.com/container-platform/3.5/install_config/install/host_preparation.html}}


<span id='Basic_OS'></span>
===Common Template Preparation===
<span id='Basic_OS_Template_Preparation'></span>

{{Warn|Use the same template for all hosts - support, OpenShift masters, infranodes and nodes - except ingress hosts, which should only come with a minimal installation, ssh and haproxy.}}

====Provision the Virtual Machine====

Provisioning procedures:
* [[Linux_Virtualization_Operations#Guest_Operations|KVM guests]]
* [[VMware Fusion Virtual Machine Provisioning|VMware Fusion Virtual Machine Provisioning]]
* [[VirtualBox_Virtual_Machine_Creation#Display|VirtualBox VMs]]

====Perform a "minimal" RHEL Installation====

Install [[RHEL_7/Centos_7_Installation#Overview|RHEL 7.3 in "minimal" installation mode]]. See the [[#NetworkManager|NetworkManager]] section below.

====NetworkManager====

OpenShift requires [[NetworkManager]] on all nodes (see https://docs.openshift.com/container-platform/3.5/install_config/install/prerequisites.html#prereq-networkmanager). Make sure it works:
 
<pre>
nmcli g
</pre>
 
====Configure a Static IP Address====
 
Assign a static IP address to the interface to be used by the OpenShift cluster, as described here: [[NetworkManager_Operations#.E2.81.A0Adding_a_Static_Ethernet_Connection|Adding a Static Ethernet Connection with NetworkManager]].
 
Editing /etc/sysconfig/network-scripts/ifcfg* also works: [[Linux 7 Configuring a Network Interface]].
 
Use a neutral address. The static IP address will be changed when the template is cloned into actual virtual machine instances.
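For reference, a minimal static configuration written by either method ends up in /etc/sysconfig/network-scripts/ifcfg-&lt;name&gt;. The fragment below is illustrative; the device name and addresses are placeholders consistent with the 192.168.122.0/24 internal network used in this environment:

```text
# /etc/sysconfig/network-scripts/ifcfg-eth0 (illustrative fragment)
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.122.100
PREFIX=24
GATEWAY=192.168.122.1
```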
 
====Set Up DNS Resolver====
 
{{Error|Return Here.}}
 
Configure NetworkManager to relinquish management of /etc/resolv.conf: [[NetworkManager_Operations#Specifying_a_DNS_Server|Network Manager and DNS Server]].
 
Stop NetworkManager (otherwise it'll overwrite the file we're modifying):
 
  systemctl stop NetworkManager
 
Then, configure the DNS resolver to point to the environment DNS server in /etc/resolv.conf:

<pre>
search openshift35.local
domain openshift35.local
nameserver <internal-ip-address-of-the-support-node-that-will-run-named>
nameserver 8.8.8.8
</pre>
 
{{Warn|Note that the internal support node address should be temporarily commented out until the DNS server is fully configured and started on that node, otherwise all network operations will be extremely slow.}}
 
Also, [[Sshd_Configuration#Turn_Off_Client_Name_DNS_Verification|turn off sshd client name DNS verification]].
 
====Subscription====
 
Attach the node to the subscription, using subscription manager, as described here: [[Red_Hat_Subscription_Manager#Register_a_Linux_System|registering a RHEL System with subscription manager]]. The support node(s) need only a Red Hat Enterprise Linux subscription. The OpenShift nodes need an OpenShift subscription. For OpenShift, follow these steps: https://docs.openshift.com/container-platform/3.5/install_config/install/host_preparation.html#host-registration.
 
A summary of the sequence of steps is available below - the goal of these steps is to configure the following supported repositories on the system: "rhel-7-server-rpms", "rhel-7-server-extras-rpms", "rhel-7-server-ose-3.5-rpms", "rhel-7-fast-datapath-rpms":


<pre>
subscription-manager repos --disable="*"
subscription-manager repos \
    --enable="rhel-7-server-rpms" \
    --enable="rhel-7-server-extras-rpms" \
    --enable="rhel-7-server-ose-3.5-rpms" \
    --enable="rhel-7-fast-datapath-rpms"
</pre>


Install base packages (https://docs.openshift.com/container-platform/3.5/install_config/install/host_preparation.html#installing-base-packages):


<pre>
yum install wget git net-tools bind-utils iptables-services bridge-utils bash-completion kexec sos psacct zip unzip
yum update -y
yum install atomic-openshift-utils
</pre>
====Excluders====


Prevent accidental upgrades of OpenShift and Docker by installing "excluder" packages. The *-excluder packages add entries to the "exclude" directive in the host's /etc/yum.conf file when installed. Those entries can be removed later when we explicitly want to upgrade OpenShift or Docker. More details in [[Yum#Exclusion|yum Exclusion]].
<pre>
yum install atomic-openshift-excluder atomic-openshift-docker-excluder
atomic-openshift-excluder unexclude
</pre>
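After the excluder packages are installed, /etc/yum.conf carries an "exclude" entry. The exact package globs vary by version, but it looks similar to:

```text
# /etc/yum.conf (fragment, illustrative)
exclude=atomic-openshift* docker*
```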
{{Warn|At this point, no OpenShift binaries, except utilities, are installed. The advanced installer knows how to override this and it will install the binaries as expected, without any further intervention.}}
====Reboot====


Reboot the system to make sure it starts correctly after package installation:

<pre>
reboot
</pre>


====Configure the Installation User====
<span id='Installation_User'></span>

Create an installation user that can log in remotely from support.openshift35.local, which is the host that will drive the Ansible installation. Conventionally, we name that user "ansible"; it must be able to passwordlessly ssh into itself and all other environment nodes, and also use passwordless sudo. The user will be part of the template image.

  groupadd -g 2200 ansible
  useradd -m -g ansible -u 2200 ansible

====Install the Public Installation Key====

The OpenShift installer requires passwordless, public key-authenticated access.

Generate a public/private key pair [[Ssh_Configure_Public/Private_Key_Authentication#Create_the_OpenSSH_Private.2FPublic_Key_Pair|as described here]] and install the public key into the template's "ansible" .ssh/authorized_keys. The private key should be set aside and later installed onto the ansible installation host.

Also, [[Sudo#Allow_a_user_to_run_all_commands_as_root_without_a_password|allow passwordless sudo]] for the 'ansible' user.
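The key setup can be sketched as below. This is an illustrative sequence, with TARGET standing in for the "ansible" user's home directory on the template:

```shell
# Illustrative sketch of the installation key setup. TARGET stands in
# for ~ansible on the template; replace it with the real home directory.
TARGET="$(mktemp -d)"
mkdir -p -m 700 "$TARGET/.ssh"
# Generate the pair without a passphrase, so the installer can log in
# non-interactively. Set the private key aside for the installation host.
ssh-keygen -q -t rsa -b 2048 -N '' -f "$TARGET/id_rsa"
# Install the public half into authorized_keys.
cat "$TARGET/id_rsa.pub" >> "$TARGET/.ssh/authorized_keys"
chmod 600 "$TARGET/.ssh/authorized_keys"
```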
 
====Firewall Configuration====

[[RHEL_7/Centos_7_Installation#Turn_off_firewalld_and_configure_the_iptables_service|Turn off firewalld and configure the iptables service]].

<pre>
systemctl stop firewalld
systemctl disable firewalld
systemctl is-enabled firewalld
yum remove firewalld
</pre>

OpenShift needs iptables running:

<pre>
systemctl enable iptables
systemctl start iptables
</pre>

====NFS Client====

Install [[Linux_NFS_Installation#Install_Packages_2|NFS client dependencies]].

====SELinux====

Make sure SELinux [[SELinux_Operations#How_to_Find_Out_Whether_SELinux_is_Enabled|is enabled]] on all hosts. If it is not, enable SELinux and make sure <tt>SELINUXTYPE</tt> is "[[SELinux_Concepts#type_targeted|targeted]]" in /etc/selinux/config.

  sestatus
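The relevant lines in /etc/selinux/config should read:

```text
SELINUX=enforcing
SELINUXTYPE=targeted
```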


====Docker Installation====

{{External|https://docs.openshift.com/container-platform/3.5/install_config/install/host_preparation.html#installing-docker}}


Install Docker on the template. On a small number of guests, such as the proxies and the support host, it will simply not be activated. Docker is technically not required on masters, but the installation seems to break if it is not available. The binaries must be installed from the rhel-7-server-ose-3.*-rpms repository, and Docker must be running before installing OpenShift.


OpenShift 3.5 requires Docker 1.12.

<pre>
yum install docker
</pre>


The advanced installation procedure is supposed to update [[Docker_Server_Configuration#RedHat.2FCentos|/etc/sysconfig/docker]] on nodes with OpenShift-specific configuration. The documentation says that the advanced installation procedure will add an "--insecure-registry" option, but that does not seem to be the case, so we add it manually in /etc/sysconfig/docker:


<pre>
INSECURE_REGISTRY='--insecure-registry 172.30.0.0/16'
</pre>

The subnet value used to configure the insecure registry corresponds to the default value of the [[OpenShift_Concepts#The_Services_Subnet|services subnet]].

Provision [[OpenShift_Concepts#Docker_Storage_in_OpenShift|storage for the Docker server]]. The [[Docker_Concepts#Loopback_Storage|default loopback storage]] is not appropriate for production; it should be replaced by a [[Linux_Logical_Volume_Management_Concepts#Thinly-Provisioned_Logical_Volumes_.28Thin_Volumes.29|thin-pool logical volume]]. Follow https://docs.openshift.com/container-platform/3.5/install_config/install/host_preparation.html#configuring-docker-storage. Use Option A) "an additional block device". On VirtualBox or KVM, provision a new virtual disk and install it. At this stage, the size is not important, as it will be replaced with the actual storage when the nodes are built. Use 100 MB for the template.
* [[VirtualBox_Operations#Creating_and_Installing_a_new_Virtual_Disk|Creating and installing a new virtual disk on VirtualBox]]
* [[Virsh_vol-create-as|Creating a new logical volume]] on KVM, followed by attachment to the template. When creating the logical volume, name it "template-docker.storage", following the [[Linux_Virtualization_Naming_Conventions#Storage_Volume_Naming_Convention|storage volume naming conventions]].

KVM example (the template VM must be shut down prior to attaching the storage):

  virsh vol-create-as --pool main-storage-pool --name template-docker.raw --capacity 1024M
  virsh vol-list --pool main-storage-pool
  virsh attach-disk template /main-storage-pool/template-docker.raw vdb --config

Restart the template VM; the new storage should be available as /dev/vdb.

Then execute /usr/bin/docker-storage-setup with the base configuration read from [[/usr/lib/docker-storage-setup/docker-storage-setup]] and custom configuration specified in /etc/sysconfig/docker-storage-setup, similarly to:

 STORAGE_DRIVER=devicemapper
 DEVS=/dev/vdb
 VG=docker_vg
 # set to a little bit less than the maximum amount of space available
 DATA_SIZE=<b>1023M</b>
 MIN_DATA_SIZE=1M

{{Warn|Setting DATA_SIZE too small caused nodes not being able to start and OpenShift [[OpenShift Concepts#OutOfDisk|OutOfDisk events]].}}

Execute:

  /usr/bin/docker-storage-setup


Under some circumstances, /usr/bin/docker-storage-setup fails with:

<pre>
...
</pre>


If this happens, use the patched docker-storage-setup available here: https://github.com/NovaOrdis/playground/blob/master/openshift/3.5/patches/node/usr/bin/docker-storage-setup

Before running it, remove any logical volume, volume group and physical volume that may have been created, and leave an empty /dev/vdb1 partition. Then run:

  /usr/bin/docker-storage-setup --force


After the script completes successfully, it creates a logical volume with an XFS filesystem mounted on the docker root directory /var/lib/docker, and the Docker storage configuration file /etc/sysconfig/docker-storage. The thin pool to be used by Docker should be visible in [[lvs]]:

<pre>
...
   root        main_vg  -wi-ao----  7.00g
</pre>

<span id='Ny62gV'></span>Alternatively, you can follow the manual procedure of provisioning Docker storage on a dedicated block device:
{{Internal|Provision Docker Storage on a Dedicated Block Device|Provision Docker Storage on a Dedicated Block Device}}


Disable docker-storage-setup; it is not needed, as the storage is already set up.


  systemctl disable docker-storage-setup
  systemctl is-enabled docker-storage-setup


Enable Docker at boot and start it.


  systemctl enable docker
  systemctl start docker

  systemctl status docker


Reboot the system and then check [[Docker Server Runtime]].


<font color=red>TODO: parse and NOKB this: https://docs.openshift.com/container-platform/3.5/scaling_performance/optimizing_storage.html#optimizing-storage</font>

Generic Docker installation instructions: [[Docker_Linux_Installation#Prerequisites|Docker Linux Installation]].
 
====Miscellaneous====
 
* Optionally [[OpenShift Core Usage Configuration#Overview|configure the number of cores used by the master and node OpenShift processes]].


====Cloud-Provider Specific Configuration====

* https://docs.openshift.com/container-platform/3.5/install_config/configuring_aws.html#install-config-configuring-aws
* https://docs.openshift.com/container-platform/3.5/install_config/configuring_openstack.html#install-config-configuring-openstack
* https://docs.openshift.com/container-platform/3.5/install_config/configuring_gce.html#install-config-configuring-gce


===Access/Proxy Host Configuration===

====Clone the Template====

Clone the template following the procedure described here: [[Linux_Virtualization_Cloning_a_KVM_Guest_Virtual_Machine|Cloning a KVM Guest]]. While cloning, consider the following:
* Adjust the memory and the number of virtual CPUs.
* The access host needs two interfaces, one external and one internal. For details see [[Linux_Virtualization_Cloning_a_KVM_Guest_Virtual_Machine#Network|Cloning a KVM Guest - Networking]] and [[Attaching_a_Guest_Directly_to_a_Virtualization_Host_Network_Interface_with_a_macvtap_Driver#Overview|Attaching a Guest Directly to a Virtualization Host Network Interface with a macvtap Driver]].
* The access host needs only one storage device.


====Post-Cloning====

* Turn off external root access.
* Add external users.
* Disable and remove Docker.

{{Warn|There is no need to install HAProxy manually to serve as the master node load balancer; the OpenShift installation procedure will do it.}}

* Add the IP addresses for the masters, support and lb (itself) to /etc/hosts, as the DNS server may not be operational when we need it.

<pre>
192.168.122.10 in in.local lb lb.local
192.168.122.12 support support.local
192.168.122.13 master1 master1.local
192.168.122.14 master2 master2.local
192.168.122.16 master3 master3.local
</pre>


===Support Host Configuration===

====Clone the Template====

Clone the template following the procedure described here: [[Linux_Virtualization_Cloning_a_KVM_Guest_Virtual_Machine|Cloning a KVM Guest]]. While cloning, consider the following:
* Adjust the memory and the number of virtual CPUs.
* The support host needs an additional block storage device, but it won't be used for Docker storage, and it will be different in size.

====Post-Cloning====

* Turn off docker and disable it.
* Remove the docker logical volume.
* Replace the docker storage with nfs storage - it should still be /dev/vdb1.


====NFS Server Installation====

Install and configure an NFS server. The NFS server will serve storage for [[OpenShift_Concepts#Persistent_Volume|persistent volumes]], [[OpenShift Concepts#Metrics|metrics]] and [[OpenShift Concepts#Logging|logging]]. Build a dedicated storage device to be shared via NFS, but do not export individual volumes; the OpenShift installer will do that automatically. At this stage, make sure the storage is mounted, the NFS server binaries are installed, and iptables is correctly configured to allow NFS to serve storage. A summary of these steps is presented in [[Provisioning and Installing a New Filesystem on a New Logical Volume]] and [[Linux NFS Installation|NFS Server Installation]]. All steps of the procedure must be executed, except the export part.
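Assuming the dedicated device carries a filesystem mounted under /support-nfs-storage (the export path used later in the pre-flight checks), the corresponding /etc/fstab entry might look like this; the volume group and logical volume names are hypothetical:

```text
# /etc/fstab (fragment, illustrative)
/dev/mapper/support_vg-nfs_lv  /support-nfs-storage  xfs  defaults  0 0
```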


====bind DNS Server Installation====
<span id='bind_DNS_Server'></span>

{{Error|During the next iteration, consider installing the DNS server on "in", as "in" is always on, and the load balancer also runs on "in" and needs internal name resolution. Also, make haproxy dependent on named.}}

{{Error|Consider using the master SkyDNS as the only name forwarder to the support DNS server.}}

{{Internal|Bind_Operations_-_Set_Up_DNS_Server|bind DNS Server Installation}}

===OpenShift Node Configuration===


Clone the template following the procedure described here: [[Linux_Virtualization_Cloning_a_KVM_Guest_Virtual_Machine|Cloning a KVM Guest]]. While cloning, adjust the memory and the number of virtual CPUs.

Configure the DNS client to use the DNS server that was installed as part of the procedure. See "[[/etc/resolv.conf#Manual_resolv.conf_Configuration|manual /etc/resolv.conf Configuration]]" and https://docs.openshift.com/container-platform/3.5/install_config/install/prerequisites.html#prereq-dns

Configure the Docker disk (note that the sequence described below needs a modified /usr/bin/docker-storage-setup):

<pre>
systemctl stop docker
unalias rm
rm -r /var/lib/docker
cat /dev/null > /etc/sysconfig/docker-storage
pvs
fdisk /dev/vdb
/usr/bin/docker-storage-setup --force
systemctl is-enabled docker
systemctl start docker
systemctl status docker
</pre>


=OpenShift Advanced Installation=

Execute the installation as "ansible" from the support node.

As "root" on "support":


<pre>
...
</pre>

==Configure Ansible Inventory File==


The default Ansible inventory file is /etc/ansible/hosts. It is used by the [[Ansible Concepts#Playbook|Ansible playbook]] to install the OpenShift environment. The inventory file describes the configuration and the topology of the [[OpenShift_Concepts#OpenShift_Cluster|OpenShift cluster]]. Start from a template like https://github.com/NovaOrdis/playground/blob/master/openshift/3.5/hosts and customize it to match the environment. Comments and additional documentation on some of the installation options are available here:

{{Internal|OpenShift hosts|OpenShift Ansible hosts Inventory File}}
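For orientation only, a skeletal HA inventory consistent with this environment might start as below; the host names and values are illustrative, not a complete working file:

```text
[OSEv3:children]
masters
nodes
etcd
lb
nfs

[OSEv3:vars]
ansible_ssh_user=ansible
ansible_become=true
deployment_type=openshift-enterprise
openshift_master_cluster_method=native
# illustrative names; they must resolve in the environment's DNS
openshift_master_cluster_hostname=master.openshift35.local
openshift_master_cluster_public_hostname=master.openshift35.external
openshift_master_default_subdomain=apps.openshift35.external

[masters]
master[1:3].openshift35.local

[lb]
lb.openshift35.local
```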


If the target nodes have multiple network interfaces, and the network interface used to cluster OpenShift is NOT associated with the default route, modify the inventory file as follows:

<pre>
master1.openshift35.local openshift_ip=172.23.0.4 ...
</pre>
==Patch Ansible Logic==
===External DNS Server Support===
The OpenShift 3.5 installer did not handle 'openshift_dns_ip' properly; the dnsmasq/NetworkManager runtime ignored it. To fix it, the following patched files had to be used:
* [https://github.com/NovaOrdis/playground/blob/master/openshift/3.5/patches/support.node/usr/share/ansible/openshift-ansible/roles/openshift_node_dnsmasq/tasks/main.yml /usr/share/ansible/openshift-ansible/roles/openshift_node_dnsmasq/tasks/main.yml]
* [https://github.com/NovaOrdis/playground/blob/master/openshift/3.5/patches/support.node/usr/share/ansible/openshift-ansible/roles/openshift_node_dnsmasq/files/networkmanager/99-origin-dns.sh /usr/share/ansible/openshift-ansible/roles/openshift_node_dnsmasq/files/networkmanager/99-origin-dns.sh]


==Pre-Flight==


On the support node:

As "root":

<pre>
cd /tmp
rm -r tmp* yum*
rm /usr/share/ansible/openshift-ansible/playbooks/byo/config.retry
</pre>

As "ansible":

<pre>
ansible all -m ping
ansible nodes -m shell -a "systemctl status docker"
ansible nodes -m shell -a "docker version"
ansible nodes -m shell -a "pvs"
ansible nodes -m shell -a "vgs"
ansible nodes -m shell -a "lvs"
ansible nodes -m shell -a "nslookup something.apps.openshift35.external"

ansible nodes -m shell -a "mkdir /mnt/tmp"
ansible nodes -m shell -a "mount -t nfs support.local:/support-nfs-storage /mnt/tmp"
ansible nodes -m shell -a "hostname > /mnt/tmp/\$(hostname).txt"
ansible nodes -m shell -a "umount /mnt/tmp"
</pre>


==Running the Advanced Installation==
<pre>
PLAY RECAP *********************************************************************
infranode1.openshift35.local : ok=222  changed=60   unreachable=0    failed=0
infranode2.openshift35.local : ok=222  changed=60   unreachable=0    failed=0
lb.openshift35.local       : ok=75   changed=14   unreachable=0    failed=0
localhost                  : ok=12   changed=0    unreachable=0    failed=0
master1.openshift35.local  : ok=1033 changed=274  unreachable=0    failed=0
master2.openshift35.local  : ok=445  changed=132  unreachable=0    failed=0
master3.openshift35.local  : ok=445  changed=132  unreachable=0    failed=0
node1.openshift35.local    : ok=222  changed=60   unreachable=0    failed=0
node2.openshift35.local    : ok=222  changed=60   unreachable=0    failed=0
support.openshift35.local  : ok=77   changed=5    unreachable=0    failed=0
</pre>
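A quick way to confirm the recap reports no problems is to scan a captured copy of the installer output for non-zero unreachable or failed counters. This is not part of the installation procedure, just a convenience sketch; "install.log" is a hypothetical file holding the ansible-playbook output:

```shell
# Report whether a captured ansible-playbook recap contains failures.
check_recap() {
  if grep -E 'unreachable=[1-9]|failed=[1-9]' "$1" >/dev/null; then
    echo "installation problems detected"
  else
    echo "all hosts OK"
  fi
}
```

Invoke as <tt>check_recap install.log</tt>.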


==Uninstallation==


If the installation procedure runs into problems, troubleshoot, and before re-starting the installation procedure, uninstall:


<pre>
ansible-playbook [-i /custom/path/to/inventory/file] /usr/share/ansible/openshift-ansible/playbooks/adhoc/uninstall.yml
</pre>


This is a relatively quick method to iterate over the installation configuration and come up with a stable configuration. However, in order to install a production deployment, you must start from a clean operating system installation. If you are using virtual machines, start from a fresh image. If you are using bare metal machines, run the following on all hosts:

<pre>
# yum -y remove openshift openshift-* etcd docker docker-common
# rm -rf /etc/origin /var/lib/openshift /etc/etcd \
    /var/lib/etcd /etc/sysconfig/atomic-openshift* /etc/sysconfig/docker* \
    /root/.kube/config /etc/ansible/facts.d /usr/share/openshift
</pre>


=Post-Install=

Perform, in this order:

==Adjust Management API/Console HAProxy Load Balancer Console==
<span id='HAProxy_Load_Balancer_Adjustments'></span>

Configure the load balancer HAProxy on the "lb" node to log in /var/log/haproxy.log. The procedure is described here: {{Internal|HAProxy_Configuration#Logging_Configuration|HAProxy Logging Configuration}}

==Deploy an Application Traffic Proxy==
<span id='Deploy_a_HAProxy_Router'></span>

If there is more than one [[OpenShift_Concepts#Router|router]] pod, or the infrastructure node the router pod has been deployed on does not expose a public IP address, public application traffic directed to the [[OpenShift_3.5_Installation#Wildcard_Domain_for_Application_Traffic|wildcard domain configured on the external DNS]] must be handled by a proxy that load balances between the router pods.

An extra HAProxy instance can do that. It can be installed and configured as follows:
* installation: [[HAProxy_Installation|HAProxy Installation]].
* configuration: [[HAProxy_Configuration|HAProxy Configuration]] (including logging).
* allow iptables access to 443.

==Post-Install Logging Configuration==

<font color=red>'''TODO''' this in the light of the fluentd DaemonSet reconfiguration:

In order for the logging pods to spread evenly across the cluster, an empty [[OpenShift Concepts#Node_Selector|node selector]] should be used. The "logging" project should have been created already. The empty [[OpenShift_Concepts#Per-Project_Node_Selector|per-project node selector]] can be updated as follows:

<pre>
oc edit namespace logging
</pre>

<pre>
...
metadata:
  annotations:
    openshift.io/node-selector: ""
...
</pre>
</font>
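For the application traffic proxy described above, a minimal haproxy.cfg sketch could look as follows. TLS pass-through in TCP mode is an assumption, as are the backend entries, which reuse this environment's infrastructure node hostnames:

```text
frontend application-traffic
    bind *:443
    mode tcp
    default_backend routers

backend routers
    mode tcp
    balance source
    server infranode1 infranode1.openshift35.local:443 check
    server infranode2 infranode2.openshift35.local:443 check
```

The "balance source" choice keeps a given client pinned to the same router pod, which helps with session affinity for pass-through routes.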


=Optional Post-Install Tasks=
==Provision Persistent Storage==
 
{{External|https://docs.openshift.com/container-platform/3.5/install_config/persistent_storage/persistent_storage_nfs.html#install-config-persistent-storage-persistent-storage-nfs}}
 
<font color=red>TODO</font>
 
==Deploy the Integrated Docker Registry==
 
<font color=red>TODO: isn't this automatically deployed? See "Internal Registry Configuration" section of the inventory file. How do I check?</font>
 
==Load Image Streams==
 
<font color=red>TODO</font>
 
==Load Templates==
 
<font color=red>TODO</font>
 
=Diagnostics=
 
From a master node, as root, execute:
 
[[oadm diagnostics]]
 
=Optional Post-Install=


==Stand-alone Registry==

{{External|https://docs.openshift.com/container-platform/3.5/install_config/install/stand_alone_registry.html#registry-installation-methods}}

{{Internal|OpenShift_Concepts#Stand-alone_Registry|Stand-alone Registry - Concepts}}

Latest revision as of 20:15, 29 April 2018

Wildcard Domain for Application Traffic

The DNS server that resolves the public addresses must be capable of supporting wildcard sub-domains, and must resolve the public wildcard DNS entry to the public IP address of the node that executes the default router, or of the public ingress node that proxies to the default router. If the environment has multiple routers, an external load balancer is required, and the wildcard record must contain the public IP address of the host that runs the load balancer. The name of the wildcard domain is specified later, during the advanced installation procedure, in the Ansible inventory file as openshift_master_default_subdomain.
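In a bind public zone, for example, the wildcard can be expressed as a single A record with a low TTL. The record below is illustrative: the sub-domain matches this environment's application domain, while the public IP address is a placeholder:

```text
*.apps.openshift35.external. 300 IN A 203.0.113.10
```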

Configuration procedures:

O/S Requirements and Configuration

https://docs.openshift.com/container-platform/3.5/install_config/install/prerequisites.html
https://docs.openshift.com/container-platform/3.5/install_config/install/host_preparation.html

Common Template Preparation


Use the same template for all hosts - support, OpenShift masters, infranodes and nodes - except ingress hosts, which should only come with a minimal installation, ssh and haproxy.

Provision the Virtual Machine

Provisioning procedures:

Perform a "minimal" RHEL installation

Install RHEL 7.3 in "minimal" installation mode. See the NetworkManager section below.

NetworkManager

OpenShift requires NetworkManager on all nodes (see https://docs.openshift.com/container-platform/3.5/install_config/install/prerequisites.html#prereq-networkmanager). Make sure it works:

nmcli g

Configure a Static IP Address

Assign a static IP address to the interface to be used by the OpenShift cluster, as described here: Adding a Static Ethernet Connection with NetworkManager.

Editing /etc/sysconfig/network-scripts/ifcfg* also works: Linux 7 Configuring a Network Interface.

Use a neutral address. The static IP address will be changed when the template is cloned into actual virtual machine instances.
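When editing the interface file directly, a static configuration amounts to a fragment along these lines. This is a sketch: the interface name and the neutral address are placeholders, with the subnet taken from this environment:

```text
# /etc/sysconfig/network-scripts/ifcfg-eth0
TYPE=Ethernet
BOOTPROTO=none
DEVICE=eth0
ONBOOT=yes
IPADDR=172.23.0.250
PREFIX=24
GATEWAY=172.23.0.1
```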

Set Up DNS Resolver


Return Here.

Configure NetworkManager to relinquish management of /etc/resolv.conf: Network Manager and DNS Server.

Stop NetworkManager (otherwise it'll overwrite the file we're modifying):

 systemctl stop NetworkManager

Then, configure the DNS resolver to point to the environment DNS server in /etc/resolv.conf:


search openshift35.local
domain openshift35.local
nameserver <internal-ip-address-of-the-support-node-that-will-run-named>
nameserver 8.8.8.8

Note that the internal support node address should be temporarily commented out until the DNS server is fully configured and started on that node, otherwise all network operations will be extremely slow.

Also, turn off sshd client name DNS verification.
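Turning off sshd client name DNS verification amounts to the following sshd_config setting (restart sshd afterwards):

```text
# /etc/ssh/sshd_config
UseDNS no
```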

Subscription

Attach the node to the subscription, using subscription manager, as described here: registering a RHEL System with subscription manager. The support node(s) need only a Red Hat Enterprise Linux subscription. The OpenShift nodes need an OpenShift subscription. For OpenShift, follow these steps: https://docs.openshift.com/container-platform/3.5/install_config/install/host_preparation.html#host-registration.

A summary of the sequence of steps is available below - the goal of these steps is to configure the following supported repositories on the system: "rhel-7-server-rpms", "rhel-7-server-extras-rpms", "rhel-7-server-ose-3.5-rpms", "rhel-7-fast-datapath-rpms":

subscription-manager register
subscription-manager list --available --matches '*OpenShift*'
subscription-manager attach --pool=<pool-id> --quantity=1
subscription-manager repos --disable="*"
subscription-manager repos --list-enabled
yum repolist
yum-config-manager --disable <repo_id>
subscription-manager repos --enable="rhel-7-server-rpms" --enable="rhel-7-server-extras-rpms" --enable="rhel-7-server-ose-3.5-rpms" --enable="rhel-7-fast-datapath-rpms"
subscription-manager repos --list-enabled
yum repolist
yum install wget git net-tools bind-utils iptables-services bridge-utils bash-completion kexec sos psacct zip unzip
yum update -y
yum install atomic-openshift-utils

Excluders

Prevent accidental upgrades of OpenShift and Docker by installing "excluder" packages. The *-excluder packages add entries to the "exclude" directive in the host's /etc/yum.conf file when installed. Those entries can be removed later when we explicitly want to upgrade OpenShift or Docker. More details in yum Exclusion.

yum install atomic-openshift-excluder atomic-openshift-docker-excluder
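Once the excluder packages are installed, /etc/yum.conf ends up with entries along these lines (illustrative; the exact package patterns depend on the excluder package versions):

```text
[main]
...
exclude=atomic-openshift* docker*
```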

If later we need to upgrade, we must run the following command:

atomic-openshift-excluder unexclude

At this point, no OpenShift binaries, except utilities, are installed. The advanced installer knows how to override this and it will install the binaries as expected, without any further intervention.

Reboot

Reboot the system to make sure it starts correctly after package installation:

systemctl reboot

Configure the Installation User

Create an installation user that can log in remotely from support.openshift35.local, which is the host that will drive the Ansible installation. Conventionally we name that user "ansible". It must be able to ssh passwordlessly into itself and all other environment nodes, and it must have passwordless sudo. The user will be part of the template image.

 groupadd -g 2200 ansible
 useradd -m -g ansible -u 2200 ansible

Install the Public Installation Key

OpenShift installer requires passwordless public key-authenticated access.

Generate a public/private key pair as described here and install the public key into template's "ansible" .ssh/authorized_keys. The private key should be set aside and later installed onto the ansible installation host.

Also, allow passwordless sudo for the 'ansible' user.
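Passwordless sudo for "ansible" can be granted with a sudoers drop-in file. The file name is a convention; validate the result with visudo -c:

```text
# /etc/sudoers.d/ansible
ansible ALL=(ALL) NOPASSWD: ALL
```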

Firewall Configuration

Turn off firewalld and configure the iptables service.

systemctl stop firewalld
systemctl disable firewalld
systemctl is-enabled firewalld
yum remove firewalld

OpenShift needs iptables running:

systemctl enable iptables
systemctl start iptables

NFS Client

Install NFS client dependencies.

SELinux

Make sure SELinux is enabled on all hosts. If it is not, enable SELinux and make sure SELINUXTYPE is set to "targeted" in /etc/selinux/config.

 sestatus
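The relevant /etc/selinux/config lines should read:

```text
SELINUX=enforcing
SELINUXTYPE=targeted
```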

Docker Installation

https://docs.openshift.com/container-platform/3.5/install_config/install/host_preparation.html#installing-docker

Install Docker on the template. On a small number of guests, such as the proxies and the support host, it will simply not be activated. Docker is technically not required on masters, but the installation seems to break if it is not available (more to comment here later). The binaries must be installed from the rhel-7-server-ose-3.*-rpms repository, and Docker must be running before OpenShift is installed.

OpenShift 3.5 requires Docker 1.12.

yum install docker
docker version

The advanced installation procedure is supposed to update /etc/sysconfig/docker on nodes with OpenShift-specific configuration. The documentation says that the advanced installation procedure will add an "--insecure-registry" option, but that does not seem to be the case, so we add it manually in /etc/sysconfig/docker:

INSECURE_REGISTRY='--insecure-registry 172.30.0.0/16'

The subnet value used to configure the insecure registry corresponds to the default value of the services subnet.

Provision storage for the Docker server. The default loopback storage is not appropriate for production; it should be replaced by a thin-pool logical volume. Follow https://docs.openshift.com/container-platform/3.5/install_config/install/host_preparation.html#configuring-docker-storage. Use Option A) "an additional block device". On VirtualBox or KVM, provision a new virtual disk and install it. At this stage, the size is not important, as it will be replaced with the actual storage when the nodes are built. Use 100 MB for the template.

KVM example (the template VM must be shut down prior to attaching the storage):

 virsh vol-create-as --pool main-storage-pool --name template-docker.raw --capacity 1024M
 virsh vol-list --pool main-storage-pool
 virsh attach-disk template /main-storage-pool/template-docker.raw vdb --config

Restart the template VM, the new storage should be available as /dev/vdb.

Then execute /usr/bin/docker-storage-setup with the base configuration read from /usr/lib/docker-storage-setup/docker-storage-setup and custom configuration specified in /etc/sysconfig/docker-storage-setup, similarly to:

STORAGE_DRIVER=devicemapper
DEVS=/dev/vdb
VG=docker_vg
# set to a little bit less than maximum amount of space available
DATA_SIZE=1023M
MIN_DATA_SIZE=1M

Setting DATA_SIZE too small has caused nodes to fail to start, accompanied by OpenShift OutOfDisk events.
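The margin can be derived mechanically from the device capacity. A sketch, where the helper name is made up; on a real node the byte count would come from "blockdev --getsize64 /dev/vdb":

```shell
# Convert a device capacity in bytes to a DATA_SIZE value in MB,
# leaving a 1 MB margin below the maximum.
data_size_mb() {
  echo "$(( $1 / 1024 / 1024 - 1 ))M"
}

data_size_mb 1073741824   # 1024 MB device -> 1023M
```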

Execute:

 /usr/bin/docker-storage-setup

Under some circumstances, /usr/bin/docker-storage-setup fails with:

[...]
end of partition 1 has impossible value for cylinders: 65 (should be in 0-64)
sfdisk: I don't like these partitions - nothing changed.
(If you really want this, use the --force option.)

If this happens, use the patched docker-storage-setup available here: https://github.com/NovaOrdis/playground/blob/master/openshift/3.5/patches/node/usr/bin/docker-storage-setup

Before running it, remove any logical volume, volume group and physical volume that may have been created and leave an empty /dev/vdb1 partition. Then run

 /usr/bin/docker-storage-setup --force

After the script completes successfully, it creates a logical volume with an XFS filesystem mounted on docker root directory /var/lib/docker and the Docker storage configuration file /etc/sysconfig/docker-storage. The thin pool to be used by Docker should be visible in lvs:

# lvs

  LV          VG        Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  docker-pool docker_vg twi-a-t--- 500.00m             0.00   0.88
  root        main_vg   -wi-ao----   7.00g

Alternatively, you can follow the manual procedure of provisioning Docker storage on a dedicated block device:

Provision Docker Storage on a Dedicated Block Device

Disable the docker-storage-setup service; it is not needed, as storage is already set up.

 systemctl disable docker-storage-setup
 systemctl is-enabled docker-storage-setup

Enable Docker at boot and start it.

 systemctl enable docker
 systemctl start docker
 
 systemctl status docker

Reboot the system and then check Docker Server Runtime.

TODO: parse and NOKB this: https://docs.openshift.com/container-platform/3.5/scaling_performance/optimizing_storage.html#optimizing-storage

Generic Docker installation instructions: Docker Linux Installation.

Miscellaneous

Cloud-Provider Specific Configuration

Access/Proxy Host Configuration

Clone the Template

Clone the template following the procedure described here Cloning a KVM Guest. While cloning, consider the following:


  • The access host needs only one storage device.

Post-Cloning

  • Turn off external root access.
  • Add external users.
  • Disable and remove Docker.

There is no need to install HAProxy manually to serve as master node load balancer, the OpenShift installation procedure will do it.

  • Add the IP addresses for masters, support and lb (itself) to /etc/hosts, the DNS server may not be operational when we need it.
192.168.122.10 in in.local lb lb.local
192.168.122.12 support support.local
192.168.122.13 master1 master1.local
192.168.122.14 master2 master2.local
192.168.122.16 master3 master3.local

Support Host Configuration

Clone the Template

Clone the template following the procedure described here Cloning a KVM Guest. While cloning, consider the following:

  • Adjust the memory and the number of virtual CPUs.
  • The support host needs an additional block storage device, but it won't be used for Docker storage and it will be different in size.

Post-Cloning

  • Turn off docker and disable it
  • Remove the docker logical volume
  • Replace the docker storage with nfs storage - it should still be /dev/vdb1

NFS Server Installation

Install and configure an NFS server. The NFS server will serve storage for persistent volumes, metrics and logging. Build a dedicated storage device to be shared via NFS, but do not export individual volumes; the OpenShift installer will do that automatically. At this stage, make sure the storage is mounted, the NFS server binaries are installed, and iptables is correctly configured to allow NFS to serve storage. A summary of these steps is presented in Provisioning and Installing a New Filesystem on a New Logical Volume and NFS Server Installation. All steps of the procedure must be executed, except the export part.
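The iptables openings amount to rules along these lines. This is a sketch: the port list assumes NFSv4 on 2049 plus rpcbind on 111 and mountd on 20048, their defaults; adjust to the environment:

```text
-A INPUT -p tcp --dport 2049 -j ACCEPT
-A INPUT -p tcp --dport 111 -j ACCEPT
-A INPUT -p udp --dport 111 -j ACCEPT
-A INPUT -p tcp --dport 20048 -j ACCEPT
```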

bind DNS Server Installation


During the next iteration, consider installing the DNS server on "in", as "in" is always on, and the load balancer also runs on "in" and it needs internal name resolution. Also make haproxy dependent on named.


Consider using master SkyDNS as the only name forwarder to the support DNS server.

bind DNS Server Installation

OpenShift Node Configuration

Clone the template following the procedure described here Cloning a KVM Guest. While cloning, consider the following:

Adjust the memory and the number of virtual CPUs.

Configure the DNS client to use the DNS server that was installed as part of the procedure. See "manual /etc/resolv.conf Configuration" and https://docs.openshift.com/container-platform/3.5/install_config/install/prerequisites.html#prereq-dns

Configure the docker disk (note that the sequence described below needs a modified /usr/bin/docker-storage-setup):

systemctl stop docker
unalias rm
rm -r /var/lib/docker
cat /dev/null > /etc/sysconfig/docker-storage
pvs
fdisk /dev/vdb
/usr/bin/docker-storage-setup --force
systemctl is-enabled docker
systemctl start docker
systemctl status docker

OpenShift Advanced Installation

https://docs.openshift.com/container-platform/latest/install_config/install/advanced_install.html#install-config-install-advanced-install
https://github.com/openshift/openshift-ansible/blob/master/inventory/byo/hosts.ose.example

The Support Node

Execute the installation as "ansible" from the support node.

As "root" on "support":

cd /usr/share/ansible
chgrp -R ansible /usr/share/ansible
chmod -R g+w /usr/share/ansible

The support node needs at least 1 GB of RAM to run the installation process.


Running the Advanced Installation

ansible-playbook -vvv /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml

For verbose installation use -vvv or -vvvv.

To use a different inventory file than /etc/ansible/hosts, run:

ansible-playbook -vvv -i /custom/path/to/inventory/file /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml


Verifying the Installation

OpenShift Installation Validation
