OpenShift Node Operations
Revision as of 21:06, 19 February 2018
Overview
A node is a Linux container host.
Getting Information about a Node
oc get nodes
oc get node <node-name>
Resource Allocation (CPU, Memory) per Pod on a Node
oc describe node/<node-name>
oc describe node <node-name>
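The per-pod CPU and memory requests/limits appear in the "Non-terminated Pods" table of the `oc describe node` output, and the node-wide totals in the "Allocated resources" section. A quick way to pull just those sections (the node name `node1` is only an example):

```shell
# Per-pod CPU/memory requests and limits for the node
oc describe node node1 | grep -A 15 "Non-terminated Pods"

# Node-wide totals of requested CPU and memory
oc describe node node1 | grep -A 6 "Allocated resources"
```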
Starting/Stopping a Node
Stop a Node
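The body of this section is empty in this revision. A minimal sketch for OpenShift 3.x, assuming the node runs the `atomic-openshift-node` systemd unit: mark the node unschedulable from a master first, then stop the service on the node host itself (`node1` is illustrative).

```shell
# On a master: stop new pods from being scheduled onto the node
oc adm manage-node node1 --schedulable=false

# On the node host: stop the node service
systemctl stop atomic-openshift-node
```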
Start a Node
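The body of this section is also empty in this revision. The reverse of the stop procedure, under the same OpenShift 3.x / `atomic-openshift-node` assumption:

```shell
# On the node host: start the node service
systemctl start atomic-openshift-node

# On a master: make the node schedulable again
oc adm manage-node node1 --schedulable=true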
Get Labels Applied to a Node
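This section's body is not shown in this revision; labels can be listed with the standard `--show-labels` flag of `oc get`, or a single value can be extracted with a jsonpath template (the label key `env` is just an example):

```shell
oc get node node1 --show-labels

# All nodes at once:
oc get nodes --show-labels

# A single label value:
oc get node node1 -o jsonpath='{.metadata.labels.env}'
```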
Update Labels on a Node
oc label node <node> <key_1>=<value_1> ... <key_n>=<value_n>
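Note that changing the value of a label that is already set requires `--overwrite`; without it, `oc label` refuses with an "already has a value" error. For example (node and label names are illustrative):

```shell
oc label node node1 env=lab              # fails if 'env' is already set
oc label node node1 env=prod --overwrite # replaces the existing value
```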
Removing a Label
Note the dash at the end of the key name.
oc label node <node> <key-to-be-removed>-
oc label node node1 blah-
Other Procedures
Adding a New Node
New nodes can be added with the scaleup.yml playbook. The playbook generates and distributes new certificates for the new hosts, then runs the configuration playbook on the new hosts only. Before running the playbook, however, the host must be prepared in the same way the existing nodes were; if the nodes were created from a template, the same template should be used to stand up the new node.
Use the same Ansible inventory file that was used to install the rest of the cluster.
On the Ansible host, update to the latest version of 'atomic-openshift-utils'
yum update atomic-openshift-utils
In the Ansible inventory file, add "new_<host_type>" to the [OSEv3:children] section:
[OSEv3:children]
masters
etcd
nodes
new_nodes
lb
nfs
Create a [new_<host_type>] section following the patterns already in place for the similar types of nodes:
[new_nodes]
node3.local openshift_node_labels="{'logging':'true', 'cluster':'noper430', 'env':'lab'}"
Execute the playbook:
cd /etc/ansible
ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-node/scaleup.yml
After the installation of the new nodes completes, move any hosts that have been defined in the [new_<host_type>] section into their appropriate section (but leave the [new_<host_type>] section definition itself in place, albeit empty) so that subsequent runs using this inventory file are aware of the nodes but do not handle them as new nodes.
[nodes]
...
node1.local openshift_node_labels="{'logging':'true', 'cluster':'noper430', 'env':'app'}"
node2.local openshift_node_labels="{'logging':'true', 'cluster':'noper430', 'env':'app'}"
node3.local openshift_node_labels="{'logging':'true', 'cluster':'noper430', 'env':'lab'}"

[new_nodes]
Removing a Node
A node is deleted using the master API, but all pods running on it must be evacuated first. When a node is deleted through the API, it is removed from the cluster state, but the pods that exist on the node are not deleted. Pods not managed by replication controllers become inaccessible, pods backed by replication controllers are rescheduled onto other nodes, and local manifest pods must be deleted manually.
Procedure to evacuate the pods:
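The evacuation command itself is not included in this revision. In OpenShift 3.x the node can be drained with `oc adm drain` (flag names per 3.x; `--delete-local-data` also evicts pods that use emptyDir volumes, and `node1` is illustrative):

```shell
# Marks the node unschedulable and evicts/deletes its pods
oc adm drain node1 --ignore-daemonsets --delete-local-data
```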
Then delete the node object:
oc delete node <node-name>
Optionally, release the virtualization resources: storage volume, guest definition, etc.