OpenShift Resource Management Concepts
External
- https://docs.openshift.com/container-platform/latest/dev_guide/compute_resources.html#dev-guide-compute-resources
- https://docs.openshift.com/container-platform/latest/admin_guide/quota.html
Internal
Overview
OpenShift provides API-level support for establishing and enforcing resource quotas. The runtime monitors resource usage and intervenes when quotas are reached or exceeded. Resource quotas can be set up and managed on the following types of resources:
- the quantity of objects that can be created per project;
- the amount of compute resources (requests and limits on CPU and memory) consumed by the project and individually by project entities such as pods and containers;
- the amount of storage consumed by project entities.
Opaque integer resources can also be set and monitored.
Resource monitoring and quota enforcement are important because they ensure that no project uses more resources than is appropriate for the cluster size. Resource constraints are primarily set by cluster administrators, but developers can also set requests and limits on compute resources.
Quota
A resource quota specifies constraints that limit aggregate resource consumption per project, and it is set by cluster administrators. The resource quota for a project is defined by a ResourceQuota object. Quota that spans multiple projects can be managed with ClusterResourceQuota. The resource quota limits:
- the quantity of objects, per type, that can be created in a project:
- Pods ("pods") - the total number of pods in a non-terminal state.
- ConfigMaps ("configmaps")
- Replication Controllers ("replicationcontrollers")
- Secrets ("secrets")
- Services ("services")
- Image Streams ("openshift.io/imagestreams")
- the total amount of compute resources consumed by the project:
- CPU Requests ("cpu" and "requests.cpu" are equivalent) - the sum of CPU requests across all pods in a non-terminal state.
- CPU Limits ("limits.cpu") - the sum of CPU limits across all pods in a non-terminal state.
- Memory Requests ("memory" and "requests.memory" are equivalent) - the sum of memory requests across all pods in a non-terminal state.
- Memory Limits ("limits.memory") - the sum of memory limits across all pods in a non-terminal state.
- the total amount of storage consumed by the project:
- Persistent Volume Claims ("persistentvolumeclaims")
- "requests.storage" - across all persistent volume claims in the project, the sum of storage requests cannot exceed this value.
- "gold.storageclass.storage.k8s.io/persistentvolumeclaims", "gold.storageclass.storage.k8s.io/requests.storage"
- "silver.storageclass.storage.k8s.io/persistentvolumeclaims", "silver.storageclass.storage.k8s.io/requests.storage"
- "bronze.storageclass.storage.k8s.io/persistentvolumeclaims", "bronze.storageclass.storage.k8s.io/requests.storage"
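As a sketch, a single ResourceQuota can combine object-count, compute, and storage constraints. The following example is illustrative only; the name and all values are assumptions, not taken from the source:

```yaml
# Hypothetical ResourceQuota combining object counts, compute resources
# and storage for one project. All names and values are illustrative.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: project-quota
spec:
  hard:
    pods: "10"                   # max pods in a non-terminal state
    configmaps: "20"
    secrets: "20"
    services: "10"
    requests.cpu: "2"            # sum of CPU requests across all pods
    requests.memory: 4Gi         # sum of memory requests across all pods
    limits.cpu: "4"              # sum of CPU limits across all pods
    limits.memory: 8Gi           # sum of memory limits across all pods
    persistentvolumeclaims: "5"
    requests.storage: 50Gi       # sum of storage requests across all PVCs
```

Such a definition would typically be created in a project with oc create, as described under Quota Operations below.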
Quota Scope
The compute resource quotas can be restricted only to pods in a non-terminal state within a certain scope. The scope can designate the pod's quality of service ("BestEffort", "NotBestEffort"), or type ("NonTerminating", "Terminating").
apiVersion: v1
kind: ResourceQuota
spec:
  ...
  scopes:
  - BestEffort
Quota Operations
All quotas for a project can be obtained with oc get quota, and information about individual quotas can be obtained with oc describe quota. Quotas can also be viewed from the web console, in the project's "Quota" page. Quotas can be created with oc create.
Limit Range
CPU Usage
CPU Request
CPU Limit
Memory Usage
Memory Request
Memory Limit
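A LimitRange constrains the CPU and memory requests and limits of individual pods and containers within a project, and can supply defaults when a container does not declare its own. A minimal sketch, with illustrative names and values (not from the source):

```yaml
# Hypothetical LimitRange constraining per-container CPU and memory.
# "default" and "defaultRequest" are applied when a container declares none.
apiVersion: v1
kind: LimitRange
metadata:
  name: project-limits
spec:
  limits:
  - type: Container
    max:
      cpu: "2"
      memory: 1Gi
    min:
      cpu: 100m
      memory: 4Mi
    default:           # default limit if the container declares none
      cpu: 300m
      memory: 200Mi
    defaultRequest:    # default request if the container declares none
      cpu: 200m
      memory: 100Mi
```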
Quality of Service
BestEffort
Burstable
Guaranteed
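The quality-of-service class of a pod is derived from its containers' requests and limits: Guaranteed when every container has requests equal to limits for both CPU and memory, BestEffort when no container sets any request or limit, and Burstable otherwise. An illustrative fragment of a container's resources stanza and the class it yields:

```yaml
# Illustrative "resources" fragment for a single container.
# Requests equal limits for both CPU and memory, so a pod in which
# every container looks like this is classified as Guaranteed.
resources:
  requests:
    cpu: 500m
    memory: 256Mi
  limits:
    cpu: 500m
    memory: 256Mi
# Burstable: at least one request or limit is set, but the Guaranteed
# condition is not met (e.g. requests lower than limits).
# BestEffort: no requests or limits set on any container in the pod.
```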
Opaque Integer Resources
Resource Consumption Enforcement
After a quota is first declared on a project, the system restricts the ability to create new resources that may exceed the quota until usage statistics are calculated. If the resource creation request exceeds the quota, the server will deny the action and will return an error message.
When a resource is created, the quota usage is updated immediately. When a resource is deleted, the quota usage is updated during the next full per-project statistics update.
Organizatorium
Enforcement
With cgroups?
Limits
Memory Limit
Propagates as /sys/fs/cgroup/memory/memory.limit_in_bytes inside the container.