Metrics in Kubernetes
External
- https://kubernetes.io/docs/tasks/debug-application-cluster/resource-usage-monitoring/
- https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/monitoring_architecture.md
Internal
- Kubernetes Resource Management Concepts
- Kubernetes Concepts
- Kubernetes Autoscaling Concepts
- Prometheus
Overview
Application health monitoring, resource consumption monitoring and scaling decisions require metrics to be collected and analyzed. Kubernetes facilitates metrics collection from containers, pods, services and the overall cluster via metric pipelines.
Metric Pipelines
A metric pipeline is a solution for collecting, propagating, optionally storing, and publishing metrics. In Kubernetes, application monitoring does not depend on a single monitoring solution. Kubernetes allows for different types of metric pipelines: resource metrics pipelines and full metrics pipelines.
Resource Metrics Pipeline
A resource metrics pipeline provides a limited set of metrics - CPU and memory - that are consumed by the Horizontal Pod Autoscaler or the kubectl top utility. The metrics are collected from kubelet processes by the lightweight, short-term, in-memory metrics-server and exposed via the Resource Metrics API. Also see: Kubernetes Metrics Server.
Full Metrics Pipeline
A full metrics pipeline provides access to richer metrics than the resource metrics pipeline. The monitoring pipeline fetches metrics from the kubelet or other sources and then exposes them to Kubernetes via an adapter that implements either the Custom Metrics API or the External Metrics API. Prometheus can be used in the implementation of a full metrics pipeline.
Metrics
Metrics are exposed via a metrics API: Resource Metrics API, Custom Metrics API or External Metrics API.
Resource Metrics
A resource metric is a numeric quantity that tracks either the CPU or memory consumed by containers and pods. The "resource" name comes from the fact that requests for such resources are declared in the "resources" section of the pod manifest. By default, the only two supported resource metrics are the CPU utilization and the memory consumed by a container. These resources do not change names from cluster to cluster, and they should be available as long as the Resource Metrics API is available. These metrics can either be accessed directly by the user, for example with the kubectl top command, or used by a controller in the cluster, e.g. the Horizontal Pod Autoscaler, to make scaling decisions.
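For reference, a minimal sketch of the "resources" section of a pod manifest, from which these metric names derive; the pod name, container name and image are hypothetical placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: example-pod             # hypothetical name
spec:
  containers:
  - name: app
    image: example/app:1.0      # hypothetical image
    resources:
      requests:
        cpu: 250m               # 0.25 CPU cores
        memory: 128Mi
      limits:
        cpu: 500m
        memory: 256Mi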
A Resource metric is configured in the Horizontal Pod Autoscaler by specifying the 'Resource' type in the autoscaler's manifest.
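A minimal sketch of such a manifest, using the autoscaling/v2beta2 API; the target deployment name and the 70% CPU utilization target are illustrative assumptions, not values mandated by Kubernetes:

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: example-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-deployment     # hypothetical scaling target
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu                  # or "memory"
      target:
        type: Utilization        # percentage of the pods' declared CPU requests
        averageUtilization: 70

With a Utilization target, the autoscaler compares observed usage against the CPU requests declared in the pod manifest, which is why the "resources" section shown above matters for scaling.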
CPU
CPU is reported as the average usage, in CPU cores, over a period of time. This value is derived by taking a rate over a cumulative CPU counter provided by the kernel. The kubelet chooses the window for the rate calculation. More details: https://kubernetes.io/docs/tasks/debug-application-cluster/resource-metrics-pipeline/#cpu
Memory
Memory is reported as the working set, in bytes, at the instant the metric was collected. More details: https://kubernetes.io/docs/tasks/debug-application-cluster/resource-metrics-pipeline/#memory
Resource Metrics API
The Resource Metrics API exposes the amount of resources currently used by a given node or pod. Some documents refer to this API as the "master metrics API". This API does not store metric values. The API is discoverable through the same endpoint as the other Kubernetes APIs under /apis/metrics.k8s.io, so it is often referred to as the "metrics.k8s.io" API. The API is served at /metrics/resource/v1beta1 on the kubelet's authenticated and read-only ports. The Resource Metrics API requires the metrics server to be deployed. The API group, and the pod and node metrics it serves, can be queried directly:
kubectl get --raw /apis/metrics.k8s.io/ | jq
kubectl get --raw /apis/metrics.k8s.io/v1beta1/pods | jq
kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes | jq
Additional Resource Metrics API documentation:
- https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/resource-metrics-api.md
- https://github.com/kubernetes/metrics
Custom Metrics
Aside from resource metrics, there are two other types of metrics, both of which are considered custom metrics: pod metrics and object metrics. Custom metrics track arbitrary quantities associated with Kubernetes objects (pods or otherwise), rather than CPU or memory consumption. Autoscaling solutions based on custom metrics are natively supported by v2beta2 Horizontal Pod Autoscalers.
Pod Metrics
Pod metrics, designated by the 'Pods' type in a Horizontal Pod Autoscaler manifest, refer to any other metric (including a custom metric) that describes the pods directly. For example, if a pod happens to run a message queue, the number of messages in the broker's queue could be a custom pod metric. The essential difference from an Object metric, described below, is that the metric is read for all pods involved and the values are averaged. For an example of how the Horizontal Pod Autoscaler can be configured to use a pod metric, see Configuring Custom Pod Metrics on Horizontal Pod Autoscaler.
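A sketch of the corresponding 'Pods' entry in the v2beta2 metrics list (the rest of the manifest is as in the Resource example above); the queue_messages metric name and its target value are hypothetical:

  metrics:
  - type: Pods
    pods:
      metric:
        name: queue_messages       # hypothetical custom pod metric
      target:
        type: AverageValue         # averaged across all pods of the scaling target
        averageValue: "30"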
Object Metrics
Object metrics describe objects in the same namespace other than pods. For example, pods may be scaled according to a metric of another cluster object, such as the average request latency of an Ingress. The essential difference from a Pod metric is that for an Object metric the autoscaler does not need to pull values for all pods and average them; it only needs to read the metric from a single object. An Object metric is designated by the 'Object' type in the Horizontal Pod Autoscaler manifest. For an example of how the Horizontal Pod Autoscaler can be configured to use an object metric, see Configuring Object Metrics on Horizontal Pod Autoscaler.
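A sketch of an 'Object' entry in the v2beta2 metrics list, assuming a hypothetical requests_per_second metric reported for a single Ingress named main-route:

  metrics:
  - type: Object
    object:
      metric:
        name: requests_per_second     # hypothetical metric reported for the Ingress
      describedObject:
        apiVersion: networking.k8s.io/v1beta1
        kind: Ingress
        name: main-route              # hypothetical Ingress name
      target:
        type: Value                   # read from the single object, not averaged per pod
        value: "2000"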
Custom Metrics API
The API is discoverable through the same endpoint as the other Kubernetes APIs under /apis/custom.metrics.k8s.io, so it is often referred to as the "custom.metrics.k8s.io" API. The Custom Metrics API is served by "adapter" API servers supplied by metrics solution vendors. An example of such an extension API server is the Prometheus adapter for Kubernetes Metrics APIs. Monitoring systems like Prometheus expose application-specific metrics to the Horizontal Pod Autoscaler controller via the Custom Metrics API.
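The adapter is registered with the Kubernetes API aggregation layer through an APIService object. A sketch of such a registration, assuming the adapter runs as a hypothetical prometheus-adapter Service in a monitoring namespace:

apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1beta1.custom.metrics.k8s.io
spec:
  group: custom.metrics.k8s.io
  version: v1beta1
  service:
    name: prometheus-adapter      # hypothetical adapter Service
    namespace: monitoring         # hypothetical namespace
  groupPriorityMinimum: 100
  versionPriority: 100
  insecureSkipTLSVerify: true     # for production, provide caBundle instead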
The Horizontal Pod Autoscaler, as the consumer of the API, expects to find the Pod metrics at:
/apis/custom.metrics.k8s.io/v1beta1/namespaces/<namespace-name>/pods/*/<metric-name>
More about the Prometheus adapter: Prometheus Adapter for Kubernetes Metrics APIs.

Example of a Custom Metrics API server: Kubernetes Custom Metrics API Server.

Additional Custom Metrics API documentation:

- https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/custom-metrics-api.md
External Metrics
An external metric is a metric that does not have an obvious relationship to any object in Kubernetes, such as a metric describing a hosted service with no direct correlation to a Kubernetes namespace. When possible, it is preferable to use custom metrics instead of external metrics, since it is easier for cluster administrators to secure the Custom Metrics API. The External Metrics API potentially allows access to any metric, so cluster administrators should take care when exposing it. For an example of how the Horizontal Pod Autoscaler can be configured to use an external metric, see Configuring External Metrics on Horizontal Pod Autoscaler.
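A sketch of an 'External' entry in the v2beta2 metrics list, assuming a hypothetical queue_messages_ready metric published by a hosted message queue outside the cluster:

  metrics:
  - type: External
    external:
      metric:
        name: queue_messages_ready      # hypothetical metric from a hosted service
        selector:
          matchLabels:
            queue: worker_tasks         # hypothetical label selector on the metric
      target:
        type: AverageValue              # metric value divided by the number of pods
        averageValue: "30"

With an AverageValue target, the autoscaler divides the external metric by the current number of pods, so the replica count tracks the metric without requiring per-pod values.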
External Metrics API
The API is discoverable through the same endpoint as the other Kubernetes APIs under /apis/external.metrics.k8s.io, so it is often referred to as the "external.metrics.k8s.io" API.
Additional External Metrics API documentation:

- https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/external-metrics-api.md
Sources of Metrics
Kubernetes Components
Kubernetes components emit metrics in Prometheus format. In most cases, metrics are available on the /metrics endpoint of the component's HTTP server.
kubelet
kubelet exposes metrics collected by cAdvisor; see kubelet Metrics Collection.
Kubernetes Metrics Server
Prometheus
Quantities
TODO:
- https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#appendix-quantities
- https://kubernetes.io/docs/reference/glossary/?all=true#term-quantity