OpenShift Network Plugins
External
- https://docs.openshift.com/container-platform/3.5/architecture/additional_concepts/sdn.html
- https://docs.openshift.com/container-platform/3.5/install_config/configuring_sdn.html
Internal
Overview
TODO: unify with the upper layer OpenShift_Concepts#SDN.2C_Overlay_Network
Pods get IP addresses from the cluster network; address allocation and packet routing are provided by a software-defined network (SDN) implemented with Open vSwitch (OVS). The specific behavior is determined by the SDN plug-in chosen at installation: subnet, multitenant or networkpolicy.
TODO: network architecture, parse and incorporate: https://docs.openshift.com/container-platform/3.5/architecture/additional_concepts/sdn.html#sdn-design-on-masters
SDN Plug-Ins
subnet
The "ovs-subnet" plug-in provides a "flat" network: every pod in the cluster can communicate with every other pod and service, regardless of the project (namespace).
Ansible configuration file:
os_sdn_network_plugin_name='redhat/openshift-ovs-subnet'
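For orientation, a minimal sketch of where this variable typically sits in the openshift-ansible inventory file; the [OSEv3:vars] section is the standard place for cluster-wide installer variables, and everything else in the inventory is omitted here:

 # Fragment of an Ansible inventory file for the openshift-ansible installer
 [OSEv3:vars]
 # Select the flat ovs-subnet SDN plug-in for the whole cluster
 os_sdn_network_plugin_name='redhat/openshift-ovs-subnet'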
multitenant
The "ovs-multitenant" is a plug-in that provides project-level network isolation for pods and services. Each project gets a unique Virtual Network ID (VNID)
Virtual Network ID (VNID)
The Virtual Network ID (VNID) identifies traffic as being initiated by the pods of a specific project. Pods cannot send packets to, or receive packets from, the pods and services of a different project, except for projects that have VNID 0: the pods of such a project can communicate with all other pods, and all other pods can communicate with them.
The "default" project has VNID 0. This allows the router service to route packets between projects.
VNIDs are managed by the masters, which allocate them to projects when the projects are created.
The VNID assigned to each project can be displayed with an oc command, which reports the VNID in the "NETID" column:
oc get netnamespaces
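Illustrative output, with hypothetical project names and NETID values; the "default" project carries NETID 0:

 NAME              NETID
 default           0
 kube-system       5781356
 my-project        9437261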
Network Isolation
Network isolation with ovs-multitenant is implemented as follows:
- When a packet exits a pod assigned to a non-default project, the OVS bridge br0 tags it with the project's VNID.
- If the packet is destined for another IP address in the node's cluster subnet, br0 delivers it only if the VNIDs of the source and destination pods match.
- Packets destined for other cluster subnets are tagged with their VNID and sent over the VXLAN tunnel, with the tunnel destination address set to the node that owns the destination cluster subnet.
- When a packet is received from another node via the VXLAN tunnel, its Tunnel ID is used as the VNID, and br0 delivers it to a local pod only if the Tunnel ID matches the destination pod's VNID.
VNID 0 is privileged: traffic with any VNID is allowed to enter a pod that belongs to VNID 0, and traffic with VNID 0 is allowed to enter any pod.
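The tagging rules described above are programmed as OpenFlow rules on br0 and can be inspected directly on a node; a command along the following lines should dump them, assuming the bridge speaks OpenFlow 1.3, as OpenShift SDN bridges do:

 # Run on a node as root; br0 is the OVS bridge managed by the SDN plug-in
 ovs-ofctl -O OpenFlow13 dump-flows br0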
Configuration
Ansible configuration file:
os_sdn_network_plugin_name='redhat/openshift-ovs-multitenant'
networkpolicy
With the "ovs-networkpolicy" plug-in, projects may configure their own isolation policies using NetworkPolicy objects.
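The plug-in itself is presumably selected with os_sdn_network_plugin_name='redhat/openshift-ovs-networkpolicy', following the naming convention of the other plug-ins. As an illustration, a minimal NetworkPolicy sketch that admits ingress traffic only from pods in the same project; the object name is hypothetical and the extensions/v1beta1 API group is assumed for this OpenShift release:

 apiVersion: extensions/v1beta1
 kind: NetworkPolicy
 metadata:
   name: allow-from-same-project    # hypothetical name
 spec:
   podSelector: {}                  # the policy applies to every pod in the project
   ingress:
   - from:
     - podSelector: {}              # peers must also be pods in the same project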