OpenShift 3.5 Installation
External
Internal
Overview
There are two installation methods: the quick install, which uses a CLI tool from the "atomic-openshift-utils" package and drives Ansible in the background, and the advanced install, which assumes familiarity with Ansible. This document covers the advanced install.
Prerequisites
External DNS Setup
An external DNS server is required. If you control a registrar DNS zone, such as one hosted at http://godaddy.com, that will work. A valid alternative is to install a dedicated DNS server. The DNS server must be available to all OpenShift environment nodes, and also to external clients that need to resolve public names such as the master public web console and API URL, the application router public DNS name, etc. A combination of a public DNS server that resolves public addresses and an internal DNS server that resolves internal addresses is also valid: if you have other means of resolving public names for clients, a DNS server deployed as part of the environment will work. It can be deployed on the support node. This installation procedure includes the installation of the bind DNS server on the support node - see the "bind DNS Server Installation" section below.
Wildcard Domain for Application Traffic
The DNS server that resolves the public addresses must be capable of supporting wildcard sub-domains and must resolve the public wildcard DNS entry to the public IP address of the node that runs the default router. If the environment has multiple routers, an external load balancer is required, and the wildcard record must contain the public IP address of the host that runs the load balancer. The name of the wildcard domain is specified later, during the advanced installation procedure, in the Ansible inventory file as 'openshift_master_default_subdomain'.
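As an illustration only, a wildcard record in a BIND zone file might look like the fragment below; the domain name and IP address are assumptions modeled on the naming used elsewhere in this document, not values produced by the installer:

; wildcard record for application traffic, pointing at the default router
; (or at the external load balancer, if multiple routers are deployed)
*.apps.openshift35.external.    IN    A    192.168.1.10

The same domain would then be declared in the inventory file as openshift_master_default_subdomain=apps.openshift35.external.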
For the configuration procedure see:
Minimum Hardware Requirements
A full RHEL7.3 master installation requires 121 MB in /boot and an average of 2.4 GB in /. A full RHEL7.3 node installation requires 121 MB in /boot and an average of 2.2 GB in /.
O/S Requirements and Configuration
Common Template Preparation
Use the same template for the support node and OpenShift nodes.
Provision the Virtual Machine
The procedure to build VirtualBox VMs is described here. The procedure to build VMware VMs is described here. The procedure to build a KVM guest is described here.
Perform a "minimal" RHEL installation
Install RHEL 7.3 in "minimal" installation mode. This document describes the installation on top of a VirtualBox or VMware Fusion virtual machine:
NetworkManager
OpenShift requires NetworkManager on all nodes (see https://docs.openshift.com/container-platform/3.5/install_config/install/prerequisites.html#prereq-networkmanager). Make sure it works:
nmcli g
Configure a Static IP Address
Assign a static IP address to the interface to be used by the OpenShift cluster, as described here: adding a Static Ethernet Connection with NetworkManager. Editing /etc/sysconfig/network-scripts/ifcfg* also works: Linux 7 Configuring a Network Interface.
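A minimal sketch of assigning a static address with nmcli, assuming the connection is named "eth0" and the chosen address is 192.168.122.13/24 with gateway 192.168.122.1 (all hypothetical values that will differ per host):

# assign a static IPv4 address to the "eth0" connection (hypothetical name and values)
nmcli connection modify eth0 ipv4.method manual ipv4.addresses 192.168.122.13/24 ipv4.gateway 192.168.122.1
# re-activate the connection so the new address takes effect
nmcli connection up eth0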
The static IP address will be changed when the template is cloned into actual virtual machine instances.
Set Up DNS Resolver
Configure NetworkManager to relinquish management of /etc/resolv.conf: Network Manager and DNS Server.
Stop NetworkManager (otherwise it'll overwrite the file we're modifying):
systemctl stop NetworkManager
Then, configure the DNS resolver to point at the environment DNS server in /etc/resolv.conf:
search openshift35.local
domain openshift35.local
nameserver <internal-ip-address-of-the-support-node-that-will-run-named>
nameserver 8.8.8.8
Note that the internal support node address should be temporarily commented out until the DNS server is fully configured and started on that node, otherwise all network operations will be extremely slow.
Also, turn off sshd client name DNS verification.
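The sshd setting involved is UseDNS; a minimal sketch of turning it off:

# in /etc/ssh/sshd_config, disable reverse DNS lookups for connecting clients
UseDNS no

# then restart sshd so the change takes effect
systemctl restart sshd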
Subscription
Attach the node to the subscription, using subscription-manager, as described here: registering a RHEL System with subscription manager. The support node(s) need only a Red Hat Enterprise Linux subscription. The OpenShift nodes need an OpenShift subscription. For OpenShift, follow these steps: https://docs.openshift.com/container-platform/3.5/install_config/install/host_preparation.html#host-registration.
A summary of the sequence of steps is available below - the goal of these steps is to configure the following supported repositories on the system: "rhel-7-server-rpms", "rhel-7-server-extras-rpms", "rhel-7-server-ose-3.5-rpms", "rhel-7-fast-datapath-rpms":
subscription-manager register
subscription-manager list --available --matches '*OpenShift*'
subscription-manager attach --pool=<pool-id> --quantity=1
subscription-manager repos --disable="*"
subscription-manager repos --list-enabled
yum repolist
yum-config-manager --disable <repo_id>
subscription-manager repos --enable="rhel-7-server-rpms" --enable="rhel-7-server-extras-rpms" --enable="rhel-7-server-ose-3.5-rpms" --enable="rhel-7-fast-datapath-rpms"
subscription-manager repos --list-enabled
yum repolist
- Install base packages (https://docs.openshift.com/container-platform/3.5/install_config/install/host_preparation.html#installing-base-packages):
yum install wget git net-tools bind-utils iptables-services bridge-utils bash-completion kexec-tools sos psacct zip unzip
yum update -y
yum install atomic-openshift-utils
Excluders
Prevent accidental upgrades of OpenShift and Docker by installing "excluder" packages. The *-excluder packages add entries to the "exclude" directive in the host's /etc/yum.conf file when installed. Those entries can be removed later, when we explicitly want to upgrade OpenShift or Docker. More details in yum Exclusion.
yum install atomic-openshift-excluder atomic-openshift-docker-excluder
If we later need to upgrade, we must first run the following command:
atomic-openshift-excluder unexclude
At this point, no OpenShift binaries, except utilities, are installed. The advanced installer knows how to override this and it will install the binaries as expected, without any further intervention.
Reboot
Reboot the system to make sure it starts correctly after package installation:
systemctl reboot
Configure the Installation User
Create an installation user that can log in remotely from support.openshift35.local, which is the host that will drive the Ansible installation. Conventionally we name that user "ansible"; it must be able to ssh passwordlessly into itself and all other environment nodes, and it must also have passwordless sudo. The user will be part of the template image.
groupadd -g 2200 ansible
useradd -m -g ansible -u 2200 ansible
Install the Public Installation Key
The OpenShift installer requires passwordless, public key-authenticated access.
Generate a public/private key pair as described here and install the public key into the template's "ansible" .ssh/authorized_keys. The private key should be set aside and later installed on the ansible installation host.
Also, allow passwordless sudo for the 'ansible' user.
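A minimal sketch of one common way to do this, via a sudoers drop-in file (the file name is arbitrary):

# grant the "ansible" user passwordless sudo
echo 'ansible ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/ansible
chmod 0440 /etc/sudoers.d/ansible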
Firewall Configuration
Turn off firewalld and configure the iptables service.
systemctl stop firewalld
systemctl disable firewalld
systemctl is-enabled firewalld
yum remove firewalld
OpenShift needs iptables running:
systemctl enable iptables
systemctl start iptables
NFS Client
Install NFS client dependencies.
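On RHEL 7 this typically means installing the nfs-utils package:

yum install nfs-utils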
SELinux
Make sure SELinux is enabled on all hosts. If it is not, enable SELinux and make sure SELINUXTYPE is set to "targeted" in /etc/selinux/config.
sestatus
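The relevant lines in /etc/selinux/config should read as follows (SELINUX=enforcing is the state the OpenShift prerequisites expect):

SELINUX=enforcing
SELINUXTYPE=targeted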
Docker Installation
Install Docker on the template. On a small number of guests, such as the proxies and the support host, it will simply not be activated. Docker is technically not required on masters, but the installation seems to break if it is not available (more to comment here later). The binaries must be installed from the rhel-7-server-ose-3.*-rpms repository, and Docker must be running before installing OpenShift.
OpenShift 3.5 requires Docker 1.12.
yum install docker
docker version
The advanced installation procedure is supposed to update /etc/sysconfig/docker on nodes with OpenShift-specific configuration. The documentation says that the advanced installation procedure will add an "--insecure-registry" option, but that does not seem to be the case, so we add it manually in /etc/sysconfig/docker:
INSECURE_REGISTRY='--insecure-registry 172.30.0.0/16'
The subnet value used to configure the insecure registry corresponds to the default value of the services subnet.
Provision storage for the Docker server. The default loopback storage is not appropriate for production; it should be replaced by a thin-pool logical volume. Follow https://docs.openshift.com/container-platform/3.5/install_config/install/host_preparation.html#configuring-docker-storage. Use Option A, "an additional block device". On VirtualBox or KVM, provision a new virtual disk and install it. At this stage, the size is not important, as it will be replaced with the actual storage when the nodes are built. Use 100 MB for the template.
- Creating and installing a new virtual disk on VirtualBox
- Creating a new logical volume on KVM, followed by attachment to the template. When creating the logical volume, name it "template-docker.storage", following the storage volume naming conventions.
KVM example (the template VM must be shut down prior to attaching the storage):
virsh vol-create-as --pool main-storage-pool --name template-docker.storage --capacity 100M
virsh vol-list --pool main-storage-pool
virsh attach-disk template /main-storage-pool/template-docker.storage vdb --config
Restart the template VM; the new storage should be available as /dev/vdb.
Then execute /usr/bin/docker-storage-setup with the base configuration read from /usr/lib/docker-storage-setup/docker-storage-setup and custom configuration specified in /etc/sysconfig/docker-storage-setup, similarly to:
STORAGE_DRIVER=devicemapper
DEVS=/dev/vdb
VG=docker_vg
DATA_SIZE=50M
MIN_DATA_SIZE=1M
Execute:
/usr/bin/docker-storage-setup
Under some circumstances, /usr/bin/docker-storage-setup fails with:
[...]
end of partition 1 has impossible value for cylinders: 65 (should be in 0-64)
sfdisk: I don't like these partitions - nothing changed.
(If you really want this, use the --force option.)
If this happens, use the patched docker-storage-setup available here: https://github.com/NovaOrdis/playground/blob/master/openshift/3.5/patches/node/usr/bin/docker-storage-setup
Before running it, remove any logical volume, volume group and physical volume that may have been created, and leave an empty /dev/vdb1 partition. Then run:
/usr/bin/docker-storage-setup --force
After the script completes successfully, it creates a logical volume with an XFS filesystem mounted on the Docker root directory /var/lib/docker, and writes the Docker storage configuration file /etc/sysconfig/docker-storage. The thin pool to be used by Docker should be visible in lvs:
# lvs
  LV          VG        Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  docker-pool docker_vg twi-a-t--- 500.00m             0.00   0.88
  root        main_vg   -wi-ao----   7.00g
Alternatively, you can follow the manual procedure of provisioning Docker storage on a dedicated block device:
Disable docker-storage-setup; it is not needed, as the storage has already been set up.
systemctl disable docker-storage-setup
systemctl is-enabled docker-storage-setup
Enable Docker at boot and start it.
systemctl enable docker
systemctl start docker
systemctl status docker
Reboot the system and then check Docker Server Runtime.
TODO: parse and NOKB this: https://docs.openshift.com/container-platform/3.5/scaling_performance/optimizing_storage.html#optimizing-storage
Generic Docker installation instructions: Docker Installation.
Miscellaneous
Cloud-Provider Specific Configuration
- https://docs.openshift.com/container-platform/3.5/install_config/configuring_aws.html#install-config-configuring-aws
- https://docs.openshift.com/container-platform/3.5/install_config/configuring_openstack.html#install-config-configuring-openstack
- https://docs.openshift.com/container-platform/3.5/install_config/configuring_gce.html#install-config-configuring-gce
Access/Proxy Host Configuration
Clone the Template
Clone the template following the procedure described here: Cloning a KVM Guest. While cloning, consider the following:
- Adjust the memory and the number of virtual CPUs.
- The access host needs two interfaces, one external and one internal. For details see Cloning a KVM Guest - Networking and Attaching a Guest Directly to a Virtualization Host Network Interface.
- The access host needs only one storage device.
Post-Cloning
- Turn off external root access.
- Add external users.
- Disable and remove Docker.
There is no need to install HAProxy manually to serve as the master node load balancer; the OpenShift installation procedure will do it.
- Add the IP addresses for the masters, support and lb (itself) to /etc/hosts; the DNS server may not be operational when we need it.
192.168.122.10 in in.local lb lb.local
192.168.122.12 support support.local
192.168.122.13 master1 master1.local
192.168.122.14 master2 master2.local
192.168.122.16 master3 master3.local
Support Host Configuration
Clone the Template
Clone the template following the procedure described here: Cloning a KVM Guest. While cloning, consider the following:
- Adjust the memory and the number of virtual CPUs.
- The support host needs an additional block storage device, but it won't be used for Docker storage and it will be different in size.
Post-Cloning
- Turn off docker and disable it
- Remove the docker logical volume
- Replace the docker storage with nfs storage - it should still be /dev/vdb1
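A sketch of these post-cloning steps, assuming the template's Docker volume group is named docker_vg and sits on /dev/vdb1 as configured above (adjust the names to the actual template):

# stop and disable Docker on the support host
systemctl stop docker
systemctl disable docker
# remove the Docker thin-pool logical volume, then the volume group and the physical volume
lvremove docker_vg/docker-pool
vgremove docker_vg
pvremove /dev/vdb1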
NFS Server Installation
Install and configure an NFS server. The NFS server will serve storage for persistent volumes, metrics and logging. Build a dedicated storage device to be shared via NFS, but do not export individual volumes; the OpenShift installer will do that automatically. At this stage, make sure the storage is mounted, the NFS server binaries are installed, and iptables is correctly configured to allow NFS to serve storage. A summary of these steps is presented in Provisioning and Installing a New Filesystem on a New Logical Volume and NFS Server Installation. All steps of the procedure must be executed, except the export part.
bind DNS Server Installation
During the next iteration, consider installing the DNS server on "in": "in" is always on, and the load balancer also runs on "in" and needs internal name resolution. Also make haproxy dependent on named.
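A sketch of one way to express that dependency with a systemd drop-in (the drop-in file name is arbitrary):

# make the haproxy unit require named and start after it
mkdir -p /etc/systemd/system/haproxy.service.d
cat > /etc/systemd/system/haproxy.service.d/named.conf <<'EOF'
[Unit]
Requires=named.service
After=named.service
EOF
systemctl daemon-reload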
OpenShift Node Configuration
Clone the template following the procedure described here: Cloning a KVM Guest. While cloning, consider the following:
Adjust the memory and the number of virtual CPUs.
Configure the DNS client to use the DNS server that was installed as part of the procedure. See "manual /etc/resolv.conf Configuration" and https://docs.openshift.com/container-platform/3.5/install_config/install/prerequisites.html#prereq-dns
Configure the docker disk (note that the sequence described below needs a modified /usr/bin/docker-storage-setup):
systemctl stop docker
unalias rm
rm -r /var/lib/docker
cat /dev/null > /etc/sysconfig/docker-storage
pvs
fdisk /dev/vdb
/usr/bin/docker-storage-setup --force
systemctl is-enabled docker
systemctl start docker
systemctl status docker
OpenShift Advanced Installation
The Support Node
Execute the installation as "ansible" from the support node.
As "root" on "support":
cd /usr/share/ansible
chgrp -R ansible /usr/share/ansible
chmod -R g+w /usr/share/ansible
The support node needs at least 1 GB of RAM to run the installation process.
Configure Ansible Inventory File
The default Ansible inventory file is /etc/ansible/hosts. It is used by the Ansible playbook to install the OpenShift environment. The inventory file describes the configuration and the topology of the OpenShift cluster. Start from a template like https://github.com/NovaOrdis/playground/blob/master/openshift/3.5/hosts and customize it to match the environment.
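A minimal sketch of what such an inventory may look like for the topology used in this document (host names follow the openshift35.local convention used elsewhere on this page; the variable values are illustrative, not a complete production configuration - use the linked template as the authoritative starting point):

[OSEv3:children]
masters
nodes
etcd
lb

[OSEv3:vars]
ansible_ssh_user=ansible
ansible_become=true
openshift_deployment_type=openshift-enterprise
openshift_release=v3.5
openshift_master_cluster_method=native
openshift_master_cluster_hostname=master.openshift35.local
openshift_master_cluster_public_hostname=master.openshift35.external
openshift_master_default_subdomain=apps.openshift35.external

[masters]
master[1:3].openshift35.local

[etcd]
master[1:3].openshift35.local

[lb]
lb.openshift35.local

[nodes]
master[1:3].openshift35.local
infranode[1:2].openshift35.local openshift_node_labels="{'region': 'infra'}"
node[1:2].openshift35.local openshift_node_labels="{'region': 'primary'}"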
If the target nodes have multiple network interfaces, and the network interface used by the OpenShift cluster is NOT associated with the default route, modify the inventory file as follows:
...
openshift_set_node_ip=true
...
[nodes]
master1.openshift35.local openshift_ip=172.23.0.4
...
Patch Ansible Logic
External DNS Server Support
The OpenShift 3.5 installer did not handle 'openshift_dns_ip' properly; the dnsmasq/NetworkManager runtime ignored it. To fix it, the following files had to be patched:
- /usr/share/ansible/openshift-ansible/roles/openshift_node_dnsmasq/tasks/main.yml
- /usr/share/ansible/openshift-ansible/roles/openshift_node_dnsmasq/files/networkmanager/99-origin-dns.sh
Pre-Flight
On the support node:
As "root":
cd /tmp
rm -r tmp* yum*
rm /usr/share/ansible/openshift-ansible/playbooks/byo/config.retry
As "ansible":
ansible all -m ping
ansible nodes -m shell -a "systemctl status docker"
ansible nodes -m shell -a "docker version"
ansible nodes -m shell -a "pvs"
ansible nodes -m shell -a "vgs"
ansible nodes -m shell -a "lvs"
ansible nodes -m shell -a "nslookup something.apps.openshift35.external"
ansible nodes -m shell -a "mkdir /mnt/tmp"
ansible nodes -m shell -a "mount -t nfs support.local:/support-nfs-storage /mnt/tmp"
ansible nodes -m shell -a "hostname > /mnt/tmp/\$(hostname).txt"
ansible nodes -m shell -a "umount /mnt/tmp"
Running the Advanced Installation
ansible-playbook -vvv /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml
For verbose installation use -vvv or -vvvv.
To use a different inventory file than /etc/ansible/hosts, run:
ansible-playbook -vvv -i /custom/path/to/inventory/file /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml
Output of a Successful Run
PLAY RECAP *********************************************************************
infranode1.openshift35.local : ok=222  changed=60  unreachable=0 failed=0
infranode2.openshift35.local : ok=222  changed=60  unreachable=0 failed=0
lb.openshift35.local         : ok=75   changed=14  unreachable=0 failed=0
localhost                    : ok=12   changed=0   unreachable=0 failed=0
master1.openshift35.local    : ok=1033 changed=274 unreachable=0 failed=0
master2.openshift35.local    : ok=445  changed=132 unreachable=0 failed=0
master3.openshift35.local    : ok=445  changed=132 unreachable=0 failed=0
node1.openshift35.local      : ok=222  changed=60  unreachable=0 failed=0
node2.openshift35.local      : ok=222  changed=60  unreachable=0 failed=0
support.openshift35.local    : ok=77   changed=5   unreachable=0 failed=0
Verifying the Installation
The validation procedure is described here: OpenShift Installation Validation.
Post-Installation Adjustments
HAProxy Load Balancer Adjustments
Configure the load balancer HAProxy to log to /var/log/haproxy.log. The procedure is described here: HAProxy Logging Configuration.
Uninstallation
If the installation procedure runs into problems, troubleshoot, and then uninstall before re-starting the installation procedure:
ansible-playbook [-i /custom/path/to/inventory/file] /usr/share/ansible/openshift-ansible/playbooks/adhoc/uninstall.yml
This is a relatively quick method to iterate over the installation configuration and come up with a stable configuration. However, in order to install a production deployment, you must start from a clean operating system installation. If you are using virtual machines, start from a fresh image. If you are using bare metal machines, run the following on all hosts:
# yum -y remove openshift openshift-* etcd docker docker-common
# rm -rf /etc/origin /var/lib/openshift /etc/etcd \
    /var/lib/etcd /etc/sysconfig/atomic-openshift* /etc/sysconfig/docker* \
    /root/.kube/config /etc/ansible/facts.d /usr/share/openshift
Post-Install
Deploy a HAProxy Router
If there is more than one router pod, the public application traffic directed to the wildcard domain configured on the external DNS must be handled by a proxy that load balances across the router pods.
This load balancer must be deployed.
TODO
Provision Persistent Storage
TODO
Deploy the Integrated Docker Registry
TODO: isn't this automatically deployed? See "Internal Registry Configuration" section of the inventory file. How do I check?
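One way to check whether it was deployed (assuming the registry, if present, lives in the "default" project, which is the standard layout):

oc get dc,svc,pods -n default | grep registry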
Load Image Streams
TODO
Load Templates
TODO