OpenShift hosts
=Internal=


* [[OpenShift_3.5_Installation#Configure_Ansible_Inventory_File|OpenShift 3.5 Installation]]
* [[OpenShift_3.6_Installation#Configure_Ansible_Inventory_File|OpenShift 3.6 Installation]]
 
=Examples=
 
* OpenShift 3.5: https://github.com/openshift/openshift-ansible/blob/release-1.5/inventory/byo/hosts.ose.example and corresponding defaults: https://github.com/openshift/openshift-ansible/blob/release-1.5/roles/openshift_metrics/defaults/main.yaml
* OpenShift 3.6: https://github.com/openshift/openshift-ansible/blob/release-3.6/inventory/byo/hosts.ose.example and corresponding defaults: https://github.com/openshift/openshift-ansible/blob/release-3.6/roles/openshift_metrics/defaults/main.yaml
* Configured template that worked for noper430 OpenShift 3.5 HA installation: https://github.com/NovaOrdis/playground/blob/master/openshift/3.5/hosts


=Overview=


The default Ansible inventory file is /etc/ansible/hosts. It is used by the [[Ansible Concepts#Playbook|Ansible playbook]] to install the OpenShift environment. The inventory file describes the configuration and the topology of the [[OpenShift_Concepts#OpenShift_Cluster|OpenShift cluster]]. A generally accepted method is to start from a template like the ones linked from [[#Examples|Examples]] above and customize it to match the environment.
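A minimal sketch of the overall structure of such an inventory (the host names and group membership below are hypothetical; the templates linked from [[#Examples|Examples]] contain the full set of groups and variables):

<pre>
[OSEv3:children]
masters
etcd
nodes

[OSEv3:vars]
# connection and deployment settings, see the sections below
ansible_ssh_user=root
openshift_deployment_type=openshift-enterprise

[masters]
master1.example.com

[etcd]
master1.example.com

[nodes]
master1.example.com
node1.example.com openshift_node_labels="{'env': 'infra'}"
</pre>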


=Configuration Elements=


==Ansible Configuration==
 
{{Warn|If anything other than "root" is used for installation, both "ansible_ssh_user" and "ansible_become" must be set.}}
 
====ansible_ssh_user====
 
The ssh user used by Ansible to connect to hosts. This user must be able to authenticate over ssh without a password and must be able to sudo to root without a password. If using ssh key-based authentication, the key should be managed by an ssh agent. Also see [[#ansible_become|ansible_become]]. See [[OpenShift_3.6_Installation#The_Support_Node|Support Node]] for the O/S-level configuration required by an "ansible" user.


====ansible_become====
If [[#ansible_ssh_user|ansible_ssh_user]] is not root, ansible_become must be set to true and the user must be configured for passwordless sudo.
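A sketch of the corresponding inventory variables, assuming a non-root user named "ansible":

ansible_ssh_user=ansible
ansible_become=true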


==Disable Requirement Check==
 
====openshift_disable_check====
 
Disables the listed pre-installation requirement checks. Do this only if you know what you are doing:
 
openshift_disable_check=disk_availability,memory_availability
 
==General Settings==


====openshift_enable_unsupported_configurations====


Enables unsupported configurations: settings that will yield a partially functioning cluster but would not be supported for production use.
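A sketch, in case such a configuration is intentionally needed:

openshift_enable_unsupported_configurations=true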


====debug_level====
Describes which INFO messages are logged to the systemd-journald.service. Set one of the following:

* 0 to log errors and warnings only
* 2 to log normal information (default)
* 4 to log debugging-level information
* 6 to log API-level (request/response) debugging information
* 8 to log body-level API debugging information
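For example, to state the default level explicitly:

debug_level=2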


This can also be configured after installation by following the procedure described here: {{Internal|OpenShift_Logging_Levels#Change_the_Log_Level_for_OpenShift_Processes|Change the Log Level for OpenShift Processes}}


====openshift_use_system_containers====


If set to true, containerized OpenShift services (instead of RPM-based) are run on all nodes. The default is "false", which means the default RPM method is used. RHEL Atomic Host requires the containerized method, which is automatically selected for you based on the detection of the /run/ostree-booted file. Since 3.1.1.
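A sketch, assuming the variable takes a boolean as described above:

openshift_use_system_containers=true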


====openshift_deployment_type====


Deployment type ("origin" or "openshift-enterprise").
For more details see: {{External|https://docs.openshift.com/container-platform/3.5/install_config/install/advanced_install.html#advanced-install-deployment-types}}
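For example:

openshift_deployment_type=openshift-enterprise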


====openshift_release====
 
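A hypothetical example, pinning the release to be installed:

openshift_release=v3.6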
====openshift_install_examples====
 
Set to true to install example imagestreams and templates during install and upgrade.
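For example:

openshift_install_examples=true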
 
====openshift_master_identity_providers====
 
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]
 
====openshift_master_htpasswd_file====
 
The path of the htpasswd file on the support node, usually /etc/ansible/installation-htpasswd.

Make sure the "ansible" user can read it.
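For example:

openshift_master_htpasswd_file=/etc/ansible/installation-htpasswd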


====osm_use_cockpit====
 
osm_use_cockpit=true
 
====osm_cockpit_plugins====
 
osm_cockpit_plugins=['cockpit-kubernetes']
 
====openshift_master_cluster_method====
 
The "native" method implies a load balancer. If no "lb" group is defined, the installer assumes that a load balancer has been independently deployed and pre-configured. If a host is defined in the "lb" section of the inventory file, Ansible installs and configures HAProxy automatically on that host. For this HA method, 'openshift_master_cluster_hostname' must resolve to the internal hostname of the load balancer or to one or all of internal hostnames of the masters defined in the inventory if no load balancer is present.
 
openshift_master_cluster_method=native
 
====openshift_master_cluster_hostname====
 
openshift_master_cluster_hostname=api-lb.ocp36.local
 
====openshift_master_cluster_public_hostname====
 
openshift_master_cluster_public_hostname=master.openshift.novaordis.io
 
====openshift_master_default_subdomain====
 
The default subdomain to use for exposed routes. This name must be a valid wildcard DNS subdomain and resolve correctly to a publicly accessible IP address both 1) externally and 2) by the DNS serving the OpenShift cluster.
 
openshift_master_default_subdomain=apps.openshift.novaordis.io
 
====osm_default_node_selector====
 
Override the node selector that projects will use by default when placing pods.
 
osm_default_node_selector='env=app'
 
====openshift_hosted_router_selector====
 
An OpenShift router will be created during install if there are nodes present with labels matching the router selector. Set openshift_node_labels per node as needed in order to label nodes (example: node.example.com openshift_node_labels="{'env': 'infra'}"). The router selector (the labels nodes must carry for a router to be created) can be changed with 'openshift_hosted_router_selector'; the default value is 'region=infra'.
 
openshift_hosted_router_selector='env=infra'
 
====openshift_hosted_router_replicas====
 
Router replicas (optional) - Unless specified, openshift-ansible will calculate the replica count based on the number of nodes matching the openshift router selector.
 
openshift_hosted_router_replicas=1
 
====openshift_hosted_registry_selector====
 
An OpenShift registry will be created during install if there are nodes present with labels matching the registry selector, "env=infra". Set openshift_node_labels per node as needed in order to label nodes.
 
openshift_hosted_registry_selector='env=infra'
 
====openshift_registry_selector====
 
openshift_registry_selector='env=infra'
 
====openshift_hosted_registry_replicas====
 
Registry replicas. Unless specified, openshift-ansible will calculate the replica count based on the number of nodes matching the openshift registry selector.
 
openshift_hosted_registry_replicas=1
 
====openshift_hosted_registry_storage_kind====
 
The most common option is "NFS Host Group": declare a host in the [nfs] group, with the assumption that an NFS server is running on it. An NFS volume will be created with path ''<nfs_directory>''/''<volume_name>'' on the host within the [nfs] host group.
 
openshift_hosted_registry_storage_kind=nfs
openshift_hosted_registry_storage_access_modes=['ReadWriteMany']
openshift_hosted_registry_storage_nfs_directory=/nfs
openshift_hosted_registry_storage_nfs_options='*(rw,root_squash)'
openshift_hosted_registry_storage_volume_name=registry
# The quantity must match the regular expression '^([+-]?[0-9.]+)([eEinumkKMGTP]*[-+]?[0-9]*)$'
openshift_hosted_registry_storage_volume_size=10Gi
 
====openshift_hosted_metrics_deploy====
 
By default, metrics are not automatically deployed; set this to enable them.
 
openshift_hosted_metrics_deploy=true
 
====openshift_hosted_metrics_storage_kind====
 
The most common option is "NFS Host Group": declare a host in the [nfs] group, with the assumption that an NFS server is running on it. An NFS volume will be created with path ''<nfs_directory>''/''<volume_name>'' on the host within the [nfs] host group.
 
openshift_hosted_metrics_storage_kind=nfs
openshift_hosted_metrics_storage_access_modes=['ReadWriteOnce']
openshift_hosted_metrics_storage_nfs_directory=/nfs
openshift_hosted_metrics_storage_nfs_options='*(rw,root_squash)'
openshift_hosted_metrics_storage_volume_name=metrics
# The quantity must match the regular expression '^([+-]?[0-9.]+)([eEinumkKMGTP]*[-+]?[0-9]*)$'
openshift_hosted_metrics_storage_volume_size=10Gi
openshift_hosted_metrics_storage_labels={'storage': 'metrics'}
 
====openshift_hosted_metrics_public_url====
 
Override metricsPublicURL in the master config for cluster metrics. Defaults to https://hawkular-metrics.''<openshift_master_default_subdomain>''/hawkular/metrics. If you alter this variable, ensure the host name is accessible via your router. Currently, you may only alter the hostname portion of the URL; altering the "/hawkular/metrics" path will break the installation of metrics. This name must be a valid host name and resolve correctly to an accessible IP address both 1) externally and 2) by the DNS serving the OpenShift cluster.
 
openshift_hosted_metrics_public_url=https://hawkular-metrics.apps.openshift.novaordis.io/hawkular/metrics
 
====openshift_metrics_hawkular_replicas====
 
The number of replicas for Hawkular metrics.
 
openshift_metrics_hawkular_replicas=1
 
====openshift_metrics_cassandra_replicas====
 
The number of Cassandra nodes for the metrics stack. This value dictates the number of Cassandra replication controllers.
 
openshift_metrics_cassandra_replicas=1
 
====openshift_metrics_cassandra_storage_type====
 
The storage directory specified below must exist on the NFS server and must be backed by a device with sufficient storage. Ansible will configure the NFS server to export ''<nfs_directory>''/''<volume_name>''.
 
openshift_metrics_cassandra_storage_type=nfs
openshift_hosted_metrics_storage_kind=nfs
openshift_hosted_metrics_storage_access_modes=['ReadWriteOnce']
openshift_hosted_metrics_storage_nfs_directory=/nfs
openshift_hosted_metrics_storage_volume_name=metrics
openshift_hosted_metrics_storage_nfs_options='*(rw,root_squash)'
# The quantity must match the regular expression '^([+-]?[0-9.]+)([eEinumkKMGTP]*[-+]?[0-9]*)$'
openshift_hosted_metrics_storage_volume_size=10Gi
 
====openshift_hosted_metrics_cassandra_nodeselector====
 
openshift_hosted_metrics_cassandra_nodeselector='env=infra'
 
====openshift_hosted_metrics_hawkular_nodeselector====
 
openshift_hosted_metrics_hawkular_nodeselector='env=infra'
 
====openshift_hosted_metrics_heapster_nodeselector====
 
openshift_hosted_metrics_heapster_nodeselector='env=infra'
 
====openshift_metrics_resolution====
 
The time interval between two successive metrics readings. Defined as a number and a time unit: seconds (s), minutes (m), hours (h). The default is 30 seconds.
 
openshift_metrics_resolution=1m
 
====openshift_metrics_duration====
 
The number of days to store metrics before they are purged. Default value is 7 days.
 
openshift_metrics_duration=1
 
====Cassandra Limits====
 
Memory request and limit for the Cassandra database pod. Default is 2Gi, which limits Cassandra to 2 GB of memory. This value could be further adjusted by the start script based on the available memory of the node on which it is scheduled. The quantity must match the regular expression '^([+-]?[0-9.]+)([eEinumkKMGTP]*[-+]?[0-9]*)$'
 
openshift_metrics_cassandra_requests_memory=1Gi
openshift_metrics_cassandra_limits_memory=1Gi
 
The CPU request and limit for the Cassandra pod. For example, a value of 4000m (4000 millicores) would limit Cassandra to 4 CPUs.
 
openshift_metrics_cassandra_limits_cpu=1000m
openshift_metrics_cassandra_requests_cpu=1000m
 
====Hawkular Limits====
 
Requests and limits for Hawkular memory. A value of 2Gi would request 2 GB of memory. The quantity must match the regular expression '^([+-]?[0-9.]+)([eEinumkKMGTP]*[-+]?[0-9]*)$'
 
openshift_metrics_hawkular_requests_memory=1Gi
openshift_metrics_hawkular_limits_memory=1Gi
 
Requests and limits for Hawkular CPU. A value of 4000m (4000 millicores) would request 4 CPUs.
 
openshift_metrics_hawkular_requests_cpu=1000m
openshift_metrics_hawkular_limits_cpu=1000m
 
====Heapster Limits====
 
Requests and limits for Heapster memory. A value of 2Gi would request 2 GB of memory. The quantity must match the regular expression '^([+-]?[0-9.]+)([eEinumkKMGTP]*[-+]?[0-9]*)$'
 
openshift_metrics_heapster_requests_memory=500Mi
openshift_metrics_heapster_limits_memory=500Mi
 
Requests and limits for Heapster CPU. A value of 4000m (4000 millicores) would request 4 CPUs.
 
openshift_metrics_heapster_requests_cpu=500m
openshift_metrics_heapster_limits_cpu=500m


====openshift_hosted_logging_deploy====


Currently, logging deployment is disabled by default; enable it by setting:


 openshift_hosted_logging_deploy=true


More details about logging infrastructure:
 
{{Internal|OpenShift_Logging_Concepts#Overview|Logging infrastructure}}
 
Other Logging Configuration Options: https://docs.openshift.com/container-platform/latest/install_config/aggregate_logging.html#aggregate-logging-ansible-variables
 
====openshift_hosted_logging_storage_kind====
 
Option A - NFS Host Group. An NFS volume will be created with path "nfs_directory/volume_name" on the host within the [nfs] host group. For example, the volume path using these options would be "/nfs/logging".
 
openshift_hosted_logging_storage_kind=nfs
openshift_hosted_logging_storage_access_modes=['ReadWriteOnce']
openshift_hosted_logging_storage_nfs_directory=/nfs
openshift_hosted_logging_storage_nfs_options='*(rw,root_squash)'
openshift_hosted_logging_storage_volume_name=logging
openshift_hosted_logging_storage_volume_size=10Gi
openshift_hosted_logging_storage_labels={'storage': 'logging'}
 
====openshift_hosted_logging_hostname====
 
openshift_hosted_logging_hostname=kibana.apps.openshift.novaordis.io
 
====openshift_logging_es_cluster_size====
 
openshift_logging_es_cluster_size=1
 
====openshift_hosted_logging_elasticsearch_cluster_size====
 
Configures the number of Elasticsearch nodes. Unless you're using dynamic provisioning, this value must be 1.
 
openshift_hosted_logging_elasticsearch_cluster_size=1
 
====openshift_logging_es_memory_limit====
 
openshift_logging_es_memory_limit=2G
 
====openshift_logging_es_nodeselector====
 
openshift_logging_es_nodeselector={'env':'infra'}
 
====openshift_logging_curator_nodeselector====
 
openshift_logging_curator_nodeselector={'env':'infra'}
 
====openshift_logging_kibana_nodeselector====
 
openshift_logging_kibana_nodeselector={'env':'infra'}
 
====os_sdn_network_plugin_name====
 
os_sdn_network_plugin_name='redhat/openshift-ovs-multitenant'
 
====openshift_master_api_port====
 
openshift_master_api_port=443
 
====openshift_master_console_port====
 
openshift_master_console_port=443
 
====openshift_set_node_ip====
 
Configure node IP in the node config. This is needed in cases where node traffic is desired to go over an interface other than the default network interface. If this attribute is set to true, then each node declaration must contain an 'openshift_ip' host variable configured with the IP address of the interface to use.
 
...
openshift_set_node_ip=true
 
...
[nodes]
master1.openshift35.local openshift_ip=172.23.0.4 ...
 
====openshift_dns_ip====
 
See: {{Internal|OpenShift_Concepts#Optional_Support_DNS_Server|OpenShift Concepts - Optional Support DNS Server}}
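A sketch, with a hypothetical address for the support DNS server:

openshift_dns_ip=172.23.0.2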
 
====openshift_clock_enabled====
 
Enables Network Time Protocol (NTP) to prevent masters and nodes in the cluster from going out of sync. It also configures usage of the openshift_clock role. Must be enabled on masters to ensure proper failover.
 
openshift_clock_enabled=true
 
="Magic" Settings that Must Be Set Otherwise Things Get Configured Incorrectly=
 
<pre>
openshift_hosted_loggingops_storage_kind=nfs
openshift_hosted_loggingops_storage_access_modes=['ReadWriteOnce']
openshift_hosted_loggingops_storage_nfs_directory=/nfs
openshift_hosted_loggingops_storage_nfs_options='*(rw,root_squash)'
# use the default name
#openshift_hosted_loggingops_storage_volume_name=logging-es-ops
openshift_hosted_loggingops_storage_volume_size=10Gi


openshift_hosted_etcd_storage_nfs_directory=/nfs
</pre>
