OpenShift 3.5 Installation
External

Internal

- OpenShift Installation
- OpenShift Installation Concepts
- OpenShift Ansible hosts Inventory File
Overview
There are two installation methods: quick install, which uses a CLI tool available in the "atomic-openshift-utils" package and which, in turn, uses Ansible in the background, and advanced install. The advanced install assumes familiarity with Ansible. This document covers the advanced install.
This document also covers an HA (High Availability) installation.
Diagram

OpenShift 3.5 HA Installation Diagram
Archive
RackStation:/base/NovaOrdis/Archive/2017.06.29-openshift-3.5-HA noper430:/root/environments/2017.06.29-openshift-3.5-HA
Prerequisites
Hardware Requirements

noper430 OpenShift 3.5 HA Installation Resources
External DNS Setup
An external DNS server is required. Controlling a registrar (such as http://godaddy.com) DNS zone provides all the capabilities needed to set up external DNS address resolution. A valid alternative, though more complex, is to install a dedicated public DNS server. The DNS server must be available to all OpenShift environment nodes, and also to external clients that need to resolve public names such as the master public web console and API URL, the public wildcard name of the environment, the application router public DNS name, etc. A combination of a public DNS server that resolves public addresses and an internal DNS server, deployed as part of the environment, usually on the support node and tasked with resolving internal addresses, is also a workable solution. This installation procedure includes the installation of the bind DNS server on the support node - see the bind DNS server installation section below.
Wildcard Domain for Application Traffic
The DNS server that resolves the public addresses must be able to support wildcard sub-domains and resolve the public wildcard DNS entry to the public IP address of the node that executes the default router, or of the public ingress node that proxies to the default router. If the environment has multiple routers, an external load balancer is required, and the wildcard record must contain the public IP address of the host that runs the load balancer. The name of the wildcard domain will be specified later during the advanced installation procedure, in the Ansible inventory file, as openshift_master_default_subdomain.

Configuration procedures:

- Declare a Wildcard Subdomain for bind
- Setting Up a Wildcard DNS at godaddy.com
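For illustration, the wildcard record in a bind zone file looks similar to the sketch below; the domain matches this environment, but the record value is a placeholder for the router or load balancer public IP. Resolution can then be spot-checked by resolving an arbitrary name under the wildcard domain with dig:

; sketch of a bind zone file fragment declaring a wildcard sub-domain
*.apps.openshift35.external.    IN    A    <public-ip-of-router-or-load-balancer>

# any name under the wildcard domain should resolve to the same address
dig +short some-random-name.apps.openshift35.external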
O/S Requirements and Configuration
Common Template Preparation
Use the same template for all hosts - support, OpenShift masters, infranodes and nodes - except ingress hosts, which should only come with a minimal installation, ssh and haproxy.
Provision the Virtual Machine
Provisioning procedures:

- KVM guests
- VMware Fusion Virtual Machine Provisioning
- VirtualBox VMs
Perform a "minimal" RHEL installation
Install RHEL 7.3 in "minimal" installation mode. See the NetworkManager section below.
NetworkManager
OpenShift requires NetworkManager on all nodes (see https://docs.openshift.com/container-platform/3.5/install_config/install/prerequisites.html#prereq-networkmanager). Make sure it works:
nmcli g
Configure a Static IP Address
Assign a static IP address to the interface to be used by the OpenShift cluster, as described here: Adding a Static Ethernet Connection with NetworkManager.
Editing /etc/sysconfig/network-scripts/ifcfg* also works: Linux 7 Configuring a Network Interface.
Use a neutral address. The static IP address will be changed when the template is cloned into actual virtual machine instances.
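A minimal sketch with nmcli, assuming the interface is eth0; the address is illustrative and will be replaced on each clone:

# create a connection with a static address and bring it up
nmcli con add type ethernet con-name eth0 ifname eth0 ip4 192.168.122.100/24 gw4 192.168.122.1
nmcli con up eth0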
Set Up DNS Resolver
Return Here.
Configure NetworkManager to relinquish management of /etc/resolv.conf: Network Manager and DNS Server.
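One way to do this, as a sketch: set dns=none in the [main] section of /etc/NetworkManager/NetworkManager.conf, which stops NetworkManager from rewriting /etc/resolv.conf:

[main]
dns=none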
Stop NetworkManager (otherwise it'll overwrite the file we're modifying):
systemctl stop NetworkManager
Then, configure the DNS resolver to point to the environment DNS server in /etc/resolv.conf:

search openshift35.local
domain openshift35.local
nameserver <internal-ip-address-of-the-support-node-that-will-run-named>
nameserver 8.8.8.8
Note that the internal support node address should be temporarily commented out until the DNS server is fully configured and started on that node; otherwise, all network operations will be extremely slow.
Also, turn off sshd client name DNS verification.
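A minimal sketch; sshd's UseDNS option controls the reverse-DNS lookup of connecting clients:

echo "UseDNS no" >> /etc/ssh/sshd_config
systemctl restart sshd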
Subscription
Attach the node to the subscription, using subscription manager, as described here: registering a RHEL System with subscription manager. The support node(s) need only a Red Hat Enterprise Linux subscription. The OpenShift nodes need an OpenShift subscription. For OpenShift, follow these steps: https://docs.openshift.com/container-platform/3.5/install_config/install/host_preparation.html#host-registration.

A summary of the sequence of steps is available below - the goal of these steps is to configure the following supported repositories on the system: "rhel-7-server-rpms", "rhel-7-server-extras-rpms", "rhel-7-server-ose-3.5-rpms", "rhel-7-fast-datapath-rpms":

subscription-manager register
subscription-manager list --available --matches '*OpenShift*'
subscription-manager attach --pool=<pool-id> --quantity=1
subscription-manager repos --disable="*"
subscription-manager repos --list-enabled
yum repolist
yum-config-manager --disable <repo_id>
subscription-manager repos --enable="rhel-7-server-rpms" --enable="rhel-7-server-extras-rpms" --enable="rhel-7-server-ose-3.5-rpms" --enable="rhel-7-fast-datapath-rpms"
subscription-manager repos --list-enabled
yum repolist
- Install base packages (https://docs.openshift.com/container-platform/3.5/install_config/install/host_preparation.html#installing-base-packages):
yum install wget git net-tools bind-utils iptables-services bridge-utils bash-completion kexec sos psacct zip unzip
yum update -y
yum install atomic-openshift-utils
Excluders
Prevent accidental upgrades of OpenShift and Docker by installing "excluder" packages. The *-excluder packages add entries to the "exclude" directive in the host’s /etc/yum.conf file when installed. Those entries can be removed later when we explicitly want to upgrade OpenShift or Docker. More details in yum Exclusion.
yum install atomic-openshift-excluder atomic-openshift-docker-excluder
If later we need to upgrade, we must run the following command:
atomic-openshift-excluder unexclude
At this point, no OpenShift binaries, except utilities, are installed. The advanced installer knows how to work around the exclusions and will install the binaries as expected, without any further intervention.
Reboot
Reboot the system to make sure it starts correctly after package installation:
systemctl reboot
Configure the Installation User
Create an installation user that can log in remotely from support.openshift35.local, which is the host that will drive the Ansible installation. Conventionally, we name that user "ansible"; it must be able to passwordlessly ssh into itself and all the other environment nodes, and it must also have passwordless sudo. The user will be part of the template image.
groupadd -g 2200 ansible
useradd -m -g ansible -u 2200 ansible
Install the Public Installation Key
OpenShift installer requires passwordless public key-authenticated access.
Generate a public/private key pair as described here and install the public key into template's "ansible" .ssh/authorized_keys. The private key should be set aside and later installed onto the ansible installation host.
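A minimal sketch, run as "ansible" on the template; the key type and size are illustrative:

ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa -N ""
# authorize the public key locally - the template is cloned into all nodes
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
# set the private key aside; it will be installed on the ansible installation host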
Also, allow passwordless sudo for the 'ansible' user.
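A sketch using a sudoers drop-in file:

echo "ansible ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/ansible
chmod 440 /etc/sudoers.d/ansible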
Firewall Configuration
Turn off firewalld and configure the iptables service.
systemctl stop firewalld
systemctl disable firewalld
systemctl is-enabled firewalld
yum remove firewalld
OpenShift needs iptables running:
systemctl enable iptables
systemctl start iptables
NFS Client
Install NFS client dependencies.
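On RHEL 7, this amounts to installing the nfs-utils package (a sketch):

yum install -y nfs-utils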
SELinux
Make sure SELinux is enabled on all hosts. If it is not, enable SELinux and make sure SELINUXTYPE is set to "targeted" in /etc/selinux/config.
sestatus
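The corresponding fragment of /etc/selinux/config should look like this:

SELINUX=enforcing
SELINUXTYPE=targeted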
Docker Installation
Install Docker on the template. On a small number of guests, such as the proxies and the support host, it will simply not be activated. Docker is also technically not required on masters, but the installation seems to break if it is not available (more to comment here later). The binaries must be installed from the rhel-7-server-ose-3.*-rpms repository, and Docker must be running before installing OpenShift.
OpenShift 3.5 requires Docker 1.12.
yum install docker
docker version
The advanced installation procedure is supposed to update /etc/sysconfig/docker on nodes with OpenShift-specific configuration. The documentation says that the advanced installation procedure will add an "--insecure-registry" option, but that does not seem to be the case, so we add it manually in /etc/sysconfig/docker:
INSECURE_REGISTRY='--insecure-registry 172.30.0.0/16'
The subnet value used to configure the insecure registry corresponds to the default value of the services subnet.
Provision storage for the Docker server. The default loopback storage is not appropriate for production; it should be replaced by a thin-pool logical volume. Follow https://docs.openshift.com/container-platform/3.5/install_config/install/host_preparation.html#configuring-docker-storage. We used Option A), "an additional block device". On VirtualBox or KVM, provision a new virtual disk and install it. At this stage, the size is not important, as it will be replaced with the actual storage when the nodes are built. Use 100 MB for the template.
- Creating and installing a new virtual disk on VirtualBox
- Creating a new logical volume on KVM, followed by attachment to the template. When creating the logical volume, name it "template-docker.storage", following the storage volume naming conventions.
KVM example (the template VM must be shut down prior to attaching the storage):
virsh vol-create-as --pool main-storage-pool --name template-docker.raw --capacity 1024M
virsh vol-list --pool main-storage-pool
virsh attach-disk template /main-storage-pool/template-docker.raw vdb --config
Restart the template VM; the new storage should be available as /dev/vdb.
Then execute /usr/bin/docker-storage-setup with the base configuration read from /usr/lib/docker-storage-setup/docker-storage-setup and custom configuration specified in /etc/sysconfig/docker-storage-setup, similarly to:
STORAGE_DRIVER=devicemapper
DEVS=/dev/vdb
VG=docker_vg
# set to a little bit less than the maximum amount of space available
DATA_SIZE=1023M
MIN_DATA_SIZE=1M
Setting DATA_SIZE too small caused nodes to fail to start and generated OpenShift OutOfDisk events.
Execute:
/usr/bin/docker-storage-setup
Under some circumstances, /usr/bin/docker-storage-setup fails with:
[...]
end of partition 1 has impossible value for cylinders: 65 (should be in 0-64)
sfdisk: I don't like these partitions - nothing changed.
(If you really want this, use the --force option.)
If this happens, use the patched docker-storage-setup available here: https://github.com/NovaOrdis/playground/blob/master/openshift/3.5/patches/node/usr/bin/docker-storage-setup
Before running it, remove any logical volume, volume group and physical volume that may have been created and leave an empty /dev/vdb1 partition. Then run
/usr/bin/docker-storage-setup --force
After the script completes successfully, it creates a logical volume with an XFS filesystem mounted on the Docker root directory /var/lib/docker, and the Docker storage configuration file /etc/sysconfig/docker-storage. The thin pool to be used by Docker should be visible in lvs:
# lvs
  LV          VG        Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  docker-pool docker_vg twi-a-t--- 500.00m             0.00   0.88
  root        main_vg   -wi-ao----   7.00g
Alternatively, you can follow the manual procedure of provisioning Docker storage on a dedicated block device:

Provision Docker Storage on a Dedicated Block Device

Disable docker-storage-setup; it is not needed, since the storage is already set up.
systemctl disable docker-storage-setup
systemctl is-enabled docker-storage-setup
Enable Docker at boot and start it.
systemctl enable docker
systemctl start docker
systemctl status docker
Reboot the system and then check Docker Server Runtime.
TODO: parse and NOKB this: https://docs.openshift.com/container-platform/3.5/scaling_performance/optimizing_storage.html#optimizing-storage
Generic Docker installation instructions: Docker Linux Installation.
Miscellaneous
Cloud-Provider Specific Configuration
- https://docs.openshift.com/container-platform/3.5/install_config/configuring_aws.html#install-config-configuring-aws
- https://docs.openshift.com/container-platform/3.5/install_config/configuring_openstack.html#install-config-configuring-openstack
- https://docs.openshift.com/container-platform/3.5/install_config/configuring_gce.html#install-config-configuring-gce
Access/Proxy Host Configuration
Clone the Template
Clone the template following the procedure described here: Cloning a KVM Guest. While cloning, consider the following:
- Adjust the memory and the number of virtual CPUs.
- The access host needs two interfaces, one external and one internal. For details see Cloning a KVM Guest - Networking and Attaching a Guest Directly to a Virtualization Host Network Interface with a macvtap Driver.
- The access host needs only one storage device.
Post-Cloning
- Turn off external root access.
- Add external users.
- Disable and remove Docker.
There is no need to install HAProxy manually to serve as the master node load balancer; the OpenShift installation procedure will do it.
- Add the IP addresses for the masters, support and lb (itself) to /etc/hosts, as the DNS server may not be operational when we need it:
192.168.122.10 in in.local lb lb.local
192.168.122.12 support support.local
192.168.122.13 master1 master1.local
192.168.122.14 master2 master2.local
192.168.122.16 master3 master3.local
Support Host Configuration
Clone the Template
Clone the template following the procedure described here: Cloning a KVM Guest. While cloning, consider the following:
- Adjust the memory and the number of virtual CPUs.
- The support host needs an additional block storage device, but it won't be used for Docker storage and it will be different in size.
Post-Cloning
- Turn off Docker and disable it.
- Remove the Docker logical volume.
- Replace the Docker storage with NFS storage - it should still be /dev/vdb1.
NFS Server Installation
Install and configure an NFS server. The NFS server will serve storage for persistent volumes, metrics and logging. Build a dedicated storage device to be shared via NFS, but do not export individual volumes; the OpenShift installer will do that automatically. At this stage, make sure the storage is mounted, the NFS server binaries are installed, and iptables is correctly configured to allow NFS to serve storage. A summary of these steps is presented in Provisioning and Installing a New Filesystem on a New Logical Volume and NFS Server Installation. All steps of the procedure must be executed, except the export part.
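A minimal sketch of the package and service part, assuming the dedicated storage is already mounted; the exports themselves are left to the OpenShift installer:

yum install -y nfs-utils
systemctl enable nfs-server
systemctl start nfs-server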
bind DNS Server Installation
During the next iteration, consider installing the DNS server on "in", as "in" is always on, and the load balancer also runs on "in" and it needs internal name resolution. Also make haproxy dependent on named.
Consider using master SkyDNS as the only name forwarder to the support DNS server.

Installation procedure: bind DNS Server Installation.
OpenShift Node Configuration
Clone the template following the procedure described here: Cloning a KVM Guest. While cloning, consider the following:
Adjust the memory and the number of virtual CPUs.
Configure the DNS client to use the DNS server that was installed as part of the procedure. See "manual /etc/resolv.conf Configuration" and https://docs.openshift.com/container-platform/3.5/install_config/install/prerequisites.html#prereq-dns
Configure the docker disk (note that the sequence described below needs a modified /usr/bin/docker-storage-setup):
systemctl stop docker
unalias rm
rm -r /var/lib/docker
cat /dev/null > /etc/sysconfig/docker-storage
pvs
fdisk /dev/vdb
/usr/bin/docker-storage-setup --force
systemctl is-enabled docker
systemctl start docker
systemctl status docker
OpenShift Advanced Installation
The Support Node
Execute the installation as "ansible" from the support node.
As "root" on "support":
cd /usr/share/ansible
chgrp -R ansible /usr/share/ansible
chmod -R g+w /usr/share/ansible
The support node needs at least 1 GB of RAM to run the installation process.
Configure Ansible Inventory File
The default Ansible inventory file is /etc/ansible/hosts. It is used by the Ansible playbook to install the OpenShift environment. The inventory file describes the configuration and the topology of the OpenShift cluster. Start from a template like https://github.com/NovaOrdis/playground/blob/master/openshift/3.5/hosts and customize it to match the environment. Comments and additional documentation on some of the installation options are available here: OpenShift Ansible hosts Inventory File.
If the target nodes have multiple network interfaces, and the network interface used to cluster OpenShift is NOT associated with the default route, modify the inventory file as follows:
...
openshift_set_node_ip=true
...
[nodes]
master1.openshift35.local openshift_ip=172.23.0.4
...
Patch Ansible Logic
External DNS Server Support
The OpenShift 3.5 installer did not handle 'openshift_dns_ip' properly - the dnsmasq/NetworkManager runtime ignored it. In order to fix it, we had to patch the following:
- /usr/share/ansible/openshift-ansible/roles/openshift_node_dnsmasq/tasks/main.yml
- /usr/share/ansible/openshift-ansible/roles/openshift_node_dnsmasq/files/networkmanager/99-origin-dns.sh
Pre-Flight
On the support node:
As "root":
cd /tmp
rm -r tmp* yum*
rm /usr/share/ansible/openshift-ansible/playbooks/byo/config.retry
As "ansible":
ansible all -m ping
ansible nodes -m shell -a "systemctl status docker"
ansible nodes -m shell -a "docker version"
ansible nodes -m shell -a "pvs"
ansible nodes -m shell -a "vgs"
ansible nodes -m shell -a "lvs"
ansible nodes -m shell -a "nslookup something.apps.openshift35.external"
ansible nodes -m shell -a "mkdir /mnt/tmp" ansible nodes -m shell -a "mount -t nfs support.local:/support-nfs-storage /mnt/tmp" ansible nodes -m shell -a "hostname > /mnt/tmp/\$(hostname).txt" ansible nodes -m shell -a "umount /mnt/tmp"
Running the Advanced Installation
ansible-playbook -vvv /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml
For verbose installation use -vvv or -vvvv.
To use a different inventory file than /etc/ansible/hosts, run:
ansible-playbook -vvv -i /custom/path/to/inventory/file /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml
Output of a Successful Run
PLAY RECAP *********************************************************************
infranode1.openshift35.local : ok=222  changed=60  unreachable=0  failed=0
infranode2.openshift35.local : ok=222  changed=60  unreachable=0  failed=0
lb.openshift35.local         : ok=75   changed=14  unreachable=0  failed=0
localhost                    : ok=12   changed=0   unreachable=0  failed=0
master1.openshift35.local    : ok=1033 changed=274 unreachable=0  failed=0
master2.openshift35.local    : ok=445  changed=132 unreachable=0  failed=0
master3.openshift35.local    : ok=445  changed=132 unreachable=0  failed=0
node1.openshift35.local      : ok=222  changed=60  unreachable=0  failed=0
node2.openshift35.local      : ok=222  changed=60  unreachable=0  failed=0
support.openshift35.local    : ok=77   changed=5   unreachable=0  failed=0
Verifying the Installation
Uninstallation
In case the installation procedure runs into problems, troubleshoot, and before re-starting the installation procedure, uninstall:
ansible-playbook [-i /custom/path/to/inventory/file] /usr/share/ansible/openshift-ansible/playbooks/adhoc/uninstall.yml
This is a relatively quick method to iterate over the installation configuration and come up with a stable configuration. However, in order to install a production deployment, you must start from a clean operating system installation. If you are using virtual machines, start from a fresh image. If you are using bare metal machines, run the following on all hosts:
# yum -y remove openshift openshift-* etcd docker docker-common
# rm -rf /etc/origin /var/lib/openshift /etc/etcd \
    /var/lib/etcd /etc/sysconfig/atomic-openshift* /etc/sysconfig/docker* \
    /root/.kube/config /etc/ansible/facts.d /usr/share/openshift
Post-Install
Perform, in this order:
Adjust the Management API/Console HAProxy Load Balancer
Configure the load balancer HAProxy on the "lb" node to log to /var/log/haproxy.log. The procedure is described here: HAProxy Logging Configuration.
Deploy an Application Traffic Proxy
If there is more than one router pod, or if the infrastructure node the router pod has been deployed on does not expose a public IP address, public application traffic directed to the wildcard domain configured on the external DNS must be handled by a proxy that load balances between the router pods.
An extra HAProxy instance can do that. It can be installed and configured as follows (see the configuration sketch after this list):

- Installation: HAProxy Installation.
- Configuration: HAProxy Configuration (including logging).
- Allow iptables access to port 443.
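A sketch of the relevant /etc/haproxy/haproxy.cfg fragment, assuming two infrastructure nodes run router pods; all server names and addresses are illustrative:

frontend application-https
    bind *:443
    mode tcp
    default_backend routers

backend routers
    mode tcp
    balance source
    server infranode1 192.168.122.21:443 check
    server infranode2 192.168.122.22:443 check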
Post-Install Logging Configuration
TODO: revisit this in light of the fluentd DaemonSet reconfiguration:
In order for the logging pods to spread evenly across the cluster, an empty node selector should be used. The "logging" project should have been created already. The empty per-project node selector can be updated as follows:
oc edit namespace logging
...
metadata:
  annotations:
    openshift.io/node-selector: ""
...
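The same change can presumably be made non-interactively; a sketch:

oc annotate namespace logging openshift.io/node-selector="" --overwrite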
Provision Persistent Storage
TODO
Deploy the Integrated Docker Registry
TODO: isn't this automatically deployed? See "Internal Registry Configuration" section of the inventory file. How do I check?
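One way to check, as a sketch: if the installer deployed the integrated registry, a "docker-registry" deployment configuration should exist in the "default" project.

oc get dc docker-registry -n default
oc get pods -n default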
Load Image Streams
TODO
Load Templates
TODO
Diagnostics
From a master node, as root, execute:
oadm diagnostics