OpenShift Resource Management Concepts

=Internal=

* [[OpenShift Concepts#Resource_Management|OpenShift Concepts]]
* [[Linux Resource Management]]
* [[Kubernetes Resource Management Concepts]]


=Overview=

OpenShift provides API-level support for establishing and enforcing resource quotas. The runtime monitors resource usage and intervenes when the quotas are reached or exceeded. Resource quotas can be set up and managed on the following types of resources: the quantity of objects that can be created per project; the amount of compute resources (requests and limits on CPU and memory) consumed by the project, and individually by project entities such as pods and containers; and the amount of storage consumed by project entities. Opaque integer resources can also be set and monitored.

Resource monitoring and constraint enforcement are important because they ensure that no project uses more resources than is appropriate for the cluster size. Resource constraints are primarily set by cluster administrators, but developers can also set requests and limits on compute resources.

=Quota=

A ''resource quota'' specifies constraints that limit aggregate resource consumption per project, and is set by cluster administrators. The resource quota for a project is defined by a ResourceQuota object. Resource quotas per cluster can be managed with ClusterResourceQuota. The resource quota limits:

* the quantity of objects, per type, that can be created in a project:
** Pods ("pods") - the total number of pods in a non-terminal state
** ConfigMaps ("configmaps")
** Replication Controllers ("replicationcontrollers")
** Secrets ("secrets")
** Services ("services")
** Image Streams ("openshift.io/imagestreams")
** Resource Quotas ("resourcequotas")
* the total amount of compute resources ("requests.cpu", "requests.memory", "limits.cpu", "limits.memory") consumed by the project. Note that if a quota has a value specified for "requests.cpu" or "requests.memory", it requires every incoming container to make an explicit request for those resources. The same rule applies to "limits.cpu" and "limits.memory".
* the total amount of storage consumed by the project:
** Persistent Volume Claims ("persistentvolumeclaims")
** "requests.storage" - across all persistent volume claims in the project, the sum of storage requests cannot exceed this value
** "gold.storageclass.storage.k8s.io/persistentvolumeclaims", "gold.storageclass.storage.k8s.io/requests.storage"
** "silver.storageclass.storage.k8s.io/persistentvolumeclaims", "silver.storageclass.storage.k8s.io/requests.storage"
** "bronze.storageclass.storage.k8s.io/persistentvolumeclaims", "bronze.storageclass.storage.k8s.io/requests.storage"
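
For illustration, a minimal sketch of a quota definition; the object name and the values are hypothetical:
 apiVersion: v1
 kind: ResourceQuota
 metadata:
   name: example-quota
 spec:
   hard:
     pods: "10"
     requests.cpu: "2"
     requests.memory: 4Gi
     limits.cpu: "4"
     limits.memory: 8Gi
     persistentvolumeclaims: "5"
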
==Quota Scope==

The compute resource quotas can be restricted to pods in a non-terminal state within a certain scope. Each quota can have an associated set of scopes, and the quota will only measure usage for a resource if it matches the intersection of the enumerated scopes. A scope can designate the pod's quality of service ("BestEffort", "NotBestEffort") or type ("Terminating", "NotTerminating").

 apiVersion: v1
 kind: ResourceQuota
 spec:
   ...
   scopes:
   - BestEffort

A "BestEffort" scope restricts the quota to limiting the number of pods ("pods").

A "Terminating", "NotTerminating" or "NotBestEffort" scope restricts the quota to tracking the following resources: "pods", "requests.memory"/"memory", "limits.memory", "requests.cpu"/"cpu", "limits.cpu".

==Quota Enforcement==

After a quota is first declared on a project, the system restricts the ability to create new resources that may exceed the quota until usage statistics are calculated. Once the usage statistics are up to date, content can be created or modified, but only if doing so does not exceed the quota; otherwise the action is denied and an error message is returned. Quota usage is incremented immediately upon resource creation or modification. When a resource is deleted, the quota is decremented during the next full recalculation of quota statistics for the project.

==Quota Operations==

All quotas for a project can be obtained with "oc get quota", and information about individual quotas can be obtained with "oc describe quota". Quotas can also be viewed in the web console, on the project's "Quota" page. Quotas are created with "oc create".
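
For example, assuming a hypothetical project "my-project" and the quota sketched above:
 oc create -f quota.yaml -n my-project
 oc get quota -n my-project
 oc describe quota example-quota -n my-project
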
=Limit Range=

{{External|https://docs.openshift.com/container-platform/latest/dev_guide/compute_resources.html#dev-limit-ranges}}

A ''limit range'' enumerates compute resource constraints in a project at the pod, container, image, image stream and persistent volume claim level, and specifies the amount of resources that such an entity can consume. Limit ranges are defined by the LimitRange object. All resource creation and modification requests are evaluated against each limit range in the project, and rejected if the resource request falls outside the limits. If the resource does not set an explicit value for a constraint, and the constraint supports a default value, the default value is applied to the resource. Limit ranges are set by cluster administrators and are project-scoped.

The limits are accessed with:
  oc get limits [-n ''project-name'']
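
For illustration, a minimal sketch of a limit range definition; the object name and the values are hypothetical:
 apiVersion: v1
 kind: LimitRange
 metadata:
   name: example-limits
 spec:
   limits:
   - type: Container
     min:
       cpu: 100m
       memory: 4Mi
     max:
       cpu: "2"
       memory: 1Gi
     defaultRequest:
       cpu: 200m
       memory: 100Mi
     default:
       cpu: 300m
       memory: 200Mi
   - type: Pod
     max:
       cpu: "2"
       memory: 1Gi
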
=Compute Resources=
Compute resource requests and limits apply to pods: the pod definition may specify them as an indication to the scheduler of how pods can best be placed on nodes to achieve satisfactory performance.
 apiVersion: v1
 kind: Pod
 spec:
   containers:
   - name: ''container-name''
     resources:
       requests:
         [[#CPU_Request|cpu]]: 500m
         [[#Memory_Request|memory]]: 100Mi
       limits:
         [[#CPU_Limit|cpu]]: 1000m
         [[#Memory_Limit|memory]]: 500Mi
==CPU Usage==
CPU usage is measured in ''millicores''; a millicore is a thousandth of a CPU core, so 500m represents half a core.
===CPU Request===
The amount of CPU a pod needs to execute. A pod will not be scheduled on a node that does not have at least "requests.cpu" available. Once scheduled, if there is no contention for CPU, the pod is allowed to use all available CPU on the node. If there is CPU contention, the "requests.cpu" values are used to calculate a relative weight across all containers on the system, determining how much CPU each container may use. CPU requests map to kernel CFS shares to enforce this behavior.
===CPU Limit===
The CPU limit specifies the maximum amount of CPU the container may use, independent of contention on the node. If the container attempts to exceed the specified limit, the system throttles the container.
==Memory Usage==
Memory is measured in bytes, but multiplier suffixes (K/Ki, M/Mi, G/Gi, T/Ti, P/Pi, E/Ei) can also be used. Ki/Mi/Gi/Ti/Pi/Ei represent power-of-two multipliers, while K/M/G/T/P/E represent the corresponding powers of ten.
===Memory Request===
By default, a container is allowed to consume as much memory on the node as possible. However, a pod may elect to request a minimum amount of guaranteed memory by specifying "requests.memory"; this instructs the scheduler to only place the pod on a node that has at least that amount of free memory. "requests.memory" still allows the pod to consume as much memory as possible on the node.
===Memory Limit===
"limits.memory" specifies the upper bound of the amount of memory the container will be allowed to use. If the container exceeds the specified memory limit, it will be terminated, and potentially restarted dependent upon the container restart policy.
"limits.memory" propagates as [[Docker_Container_Downward_API#memory.limit_in_bytes|/sys/fs/cgroup/memory/memory.limit_in_bytes]] in container.
==Quality of Service==
A compute resource is classified with a ''quality of service'' (QoS) attribute depending on the request and limit values used to request it. A container may have a different quality of service for each compute resource, as illustrated by the example at the end of this section.
===BestEffort===
The resource is provided with BestEffort quality of service when neither a request nor a limit is specified. A BestEffort CPU container is able to consume as much CPU as is available on the node, but runs with the lowest priority. A BestEffort memory container is able to consume as much memory as is available on the node, but there is no guarantee that the scheduler will place the container on a node with enough memory. In addition, BestEffort containers have the greatest chance of being killed if there is an out-of-memory event on the node.
===Burstable===
The resource is provided with Burstable quality of service when a request value is specified that is less than an optionally specified limit. A Burstable CPU container is guaranteed to get the minimum amount of CPU requested, but it may or may not get additional CPU time. Excess CPU resources are distributed based on the amounts requested across all containers on the node. A Burstable memory container gets the amount of memory requested, but it may consume more. If there is an out-of-memory event on the node, Burstable containers are killed after the BestEffort containers when the system attempts to recover memory.
===Guaranteed===
The resource is provided with Guaranteed quality of service when both a request and a limit are specified and they are equal. A Guaranteed CPU container is guaranteed to get the amount requested and no more, even if additional CPU is available. A Guaranteed memory container gets the amount of memory requested, but no more. If an out-of-memory event occurs, such a container is only killed if there are no more BestEffort or Burstable containers on the system.
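As an illustration, a minimal sketch with a hypothetical pod name and image: the container below runs with Burstable CPU (request lower than limit) and Guaranteed memory (request equal to limit).
 apiVersion: v1
 kind: Pod
 metadata:
   name: qos-example
 spec:
   containers:
   - name: app
     image: example.io/app:latest
     resources:
       requests:
         cpu: 250m       # less than the limit: Burstable CPU
         memory: 256Mi   # equal to the limit: Guaranteed memory
       limits:
         cpu: 500m
         memory: 256Mi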


=Opaque Integer Resources=

{{External|https://docs.openshift.com/container-platform/latest/dev_guide/compute_resources.html#opaque-integer-resources-dev}}

=Compute Resources for System Daemons=

{{External|https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/}}

=Handling Out of Resource Events=

{{External|https://access.redhat.com/documentation/en-us/openshift_container_platform/3.7/html/cluster_administration/admin-guide-handling-out-of-resource-errors}}

=Access to Resource Specification from Pods=

A container's resource requests and limits can be exposed to processes running in the pod as environment variables, populated via the downward API's "resourceFieldRef", as in this fragment of a logging pod specification:
...
        - env:
          # expose the container's CPU limit as an environment variable
          - name: FLUENTD_CPU_LIMIT
            valueFrom:
              resourceFieldRef:
                containerName: fluentd-elasticsearch
                divisor: "0"
                resource: limits.cpu
          # expose the container's memory limit as an environment variable
          - name: FLUENTD_MEMORY_LIMIT
            valueFrom:
              resourceFieldRef:
                containerName: fluentd-elasticsearch
                divisor: "0"
                resource: limits.memory
...
