OpenShift Node Operations

=External=

=Internal=

=Overview=

A node is a Linux container host. More details about nodes are available here:

{{Internal|OpenShift_Concepts|OpenShift Concepts - Nodes}}

=Getting Information about a Node=
 oc [[Oc_get#node.2C_nodes|get nodes]]
 oc [[Oc_get#node.2C_nodes|get node]] ''<node-name>''
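The usual ''oc get'' output options apply here as well; for example (standard ''oc'' CLI flags, not specific to nodes), ''-o wide'' adds extra columns such as the node addresses and kubelet version, and ''-o yaml'' dumps the full node object:

 oc get nodes -o wide
 oc get node ''<node-name>'' -o yaml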
==Resource Allocation (CPU, Memory) per Pod on a Node==
 oc describe node/''<node-name>''
 oc describe node ''<node-name>''
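The per-pod CPU and memory requests and limits are part of the describe output; a quick way to narrow it down to just that table (the "Non-terminated Pods" section name reflects typical ''oc describe node'' output and may vary between versions):

 oc describe node ''<node-name>'' | grep -A 20 'Non-terminated Pods'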


=Starting/Stopping a Node=
==Stop a Node==
Stop the node process: [[OpenShift_Runtime#Node_Process_Management_Operations|Node Process Management Operations]].
==Start a Node==
Start the node process: [[OpenShift_Runtime#Node_Process_Management_Operations|Node Process Management Operations]].
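In both cases, on an RPM-based OpenShift 3.x host the node runs as a systemd service; a minimal sketch, assuming the enterprise unit name ''atomic-openshift-node'' (origin installations use ''origin-node''; the linked page is authoritative):

 systemctl stop atomic-openshift-node      # stop the node process
 systemctl start atomic-openshift-node     # start it again
 systemctl status atomic-openshift-node    # verify its state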


=Get Labels Applied to a Node=
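The labels can be listed with the standard ''--show-labels'' option of ''oc get'':

 oc get node ''<node-name>'' --show-labels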
=Update Labels on a Node=


{{External|https://docs.openshift.com/container-platform/3.5/admin_guide/manage_nodes.html#updating-labels-on-nodes}}
 
oc label node <''node''> <key_1>=<value_1> ... <key_n>=<value_n>
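For example, to set a label and then change its value on a hypothetical node (changing an existing value requires the standard ''--overwrite'' flag):

 oc label node node1.local env=lab
 oc label node node1.local env=app --overwrite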
 
==Removing a Label==
 
Note the dash at the end of the key name.
 
oc label node <''node''> <''key-to-be-removed''>-
 
oc label node node1 blah-


=Other Procedures=

* [[Oadm_manage-node#--schedulable|Make a node schedulable/unschedulable]]


=Adding a New Node=
 
{{External|https://docs.openshift.com/container-platform/3.5/install_config/adding_hosts_to_existing_cluster.html#adding-nodes-advanced}}
 
New nodes can be added with the scaleup.yml playbook. The playbook generates and distributes new certificates for the new hosts, then runs the configuration playbook on the new hosts only. However, before running the playbook, the host must be prepared in the same way the other nodes were. If the nodes were created based on a template, the same template should be used to stand up the new node. For more details, see:
 
{{Internal|OpenShift_3.5_Installation#OpenShift_Node_Configuration|OpenShift 3.5 Installation Procedure - Node Configuration}}
 
Use the same [[OpenShift hosts|Ansible inventory file]] that was used to install the rest of the cluster.
 
On the Ansible host, update to the latest version of the 'atomic-openshift-utils' package:
 
yum update atomic-openshift-utils
 
In the [[OpenShift hosts|Ansible inventory file]], add a "new_<''host_type''>" entry to the [OSEv3:children] section:
 
[OSEv3:children]
masters
etcd
nodes
<font color='orange'>new_nodes</font>
lb
nfs
 
Create a [new_<''host_type''>] section, following the pattern already in place for similar node types:
 
[new_nodes]
 
node3.local openshift_node_labels="{'logging':'true', 'cluster':'noper430', 'env':'lab'}"
 
Execute the playbook:
 
cd /etc/ansible
ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-node/scaleup.yml
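Once the playbook completes, the new node should register itself with the master and show up in the node list (node name taken from the example inventory above):

 oc get nodes
 oc describe node node3.local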
 
After the installation of the new nodes completes, move any hosts that have been defined in the [new_<''host_type''>] section into their appropriate section (but leave the [new_<''host_type''>] section definition itself in place, albeit empty) so that subsequent runs using this inventory file are aware of the nodes but do not handle them as new nodes.
 
[nodes]
...
node1.local openshift_node_labels="{'logging':'true', 'cluster':'noper430', 'env':'app'}"
node2.local openshift_node_labels="{'logging':'true', 'cluster':'noper430', 'env':'app'}"
<font color=orange>node3.local openshift_node_labels="{'logging':'true', 'cluster':'noper430', 'env':'lab'}"</font>
[new_nodes]


=Removing a Node=


{{External|https://docs.openshift.com/container-platform/latest/admin_guide/manage_nodes.html#deleting-nodes}}
 
A node is deleted using the master API, but before deleting it, all pods running on it must be evacuated. When a node is deleted with the API, it is removed from the cluster state, but the pods that exist on the node are not deleted. Pods not managed by replication controllers become inaccessible, pods backed by replication controllers are rescheduled onto other nodes, and [[OpenShift_Concepts#Local_Manifest_Pod|local manifest pods]] must be deleted manually.
 
Procedure to evacuate a pod:
 
{{Internal|OpenShift_Pod_Operations#Evacuate_Pods_from_Node|Evacuate Pods from Node}}
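For reference, in OpenShift 3.x the evacuation is typically driven with [[Oadm_manage-node|oadm manage-node]]; a minimal sketch (see the linked procedure for additional options such as a dry run):

 oadm manage-node ''<node-name>'' --schedulable=false   # stop scheduling new pods on this node
 oadm manage-node ''<node-name>'' --list-pods           # review what is currently running there
 oadm manage-node ''<node-name>'' --evacuate            # reschedule pods backed by replication controllers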
 
Then delete the node object:
 
oc delete node <''node-name''>
 
Optionally, release the virtualization resources: storage volume, guest definition, etc.
 
=Rebooting a Node=
 
{{External|https://docs.openshift.com/container-platform/latest/admin_guide/manage_nodes.html#rebooting-nodes}}
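The general flow is to make the node unschedulable, evacuate its pods, reboot the host, then make it schedulable again; a sketch using the same [[Oadm_manage-node|oadm manage-node]] options as above (see the linked documentation for the authoritative procedure):

 oadm manage-node ''<node-name>'' --schedulable=false
 oadm manage-node ''<node-name>'' --evacuate
 systemctl reboot                                       # on the node host itself
 oadm manage-node ''<node-name>'' --schedulable=true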
 
=Adding a New Master=
 
{{External|https://docs.openshift.com/container-platform/latest/install_config/adding_hosts_to_existing_cluster.html#adding-nodes-advanced}}
