Metrics in Kubernetes

External

Internal

Overview

Application health monitoring, resource consumption monitoring and scaling decisions require metrics collection and analysis. Kubernetes facilitates metrics collection from containers, pods, services and the overall cluster via metric pipelines.

Metric Pipelines

A metric pipeline is a solution that collects, propagates, optionally stores, and publishes metrics. In Kubernetes, application monitoring does not depend on a single monitoring solution; instead, Kubernetes allows for different types of metric pipelines: resource metrics pipelines and full metrics pipelines.

Resource Metrics Pipeline

https://kubernetes.io/docs/tasks/debug-application-cluster/resource-metrics-pipeline/

A resource metrics pipeline provides a limited set of metrics that are consumed by the Horizontal Pod Autoscaler or by the kubectl top utility. The metrics are collected from kubelet processes by the lightweight, short-term, in-memory metrics-server and exposed via the Resource Metrics API. Also see:

Kubernetes Metrics Server

Full Metrics Pipeline

https://kubernetes.io/docs/tasks/debug-application-cluster/resource-usage-monitoring/#full-metrics-pipeline

A full metrics pipeline provides access to richer metrics than the resource metrics pipeline. The monitoring pipeline fetches metrics from the kubelet and then exposes them to Kubernetes via an adapter that implements either the custom.metrics.k8s.io or the external.metrics.k8s.io API. Prometheus can be used in the implementation of a full metrics pipeline.

Metrics

Metrics are exposed via a metrics API: resource metrics API, custom metrics API or external metrics API.

Resource Metrics

A resource metric is a numeric quantity that tracks either the CPU or the memory consumed by containers and pods. The "resource" name comes from the fact that requests for such resources are declared in the "resources" section of the pod manifest. By default, the only two supported resource metrics are the CPU utilization and the memory consumed by a container. These resources do not change names from cluster to cluster, and they should be available as long as the Resource Metrics API is available. These metrics can be either accessed directly by the user, for example with the kubectl top command, or used by a controller in the cluster, e.g. the Horizontal Pod Autoscaler, to make scaling decisions.

A Resource metric is designated by the 'Resource' type in the Horizontal Pod Autoscaler manifest.
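
Below is a minimal sketch of a Horizontal Pod Autoscaler manifest that scales on a Resource metric (CPU utilization). The autoscaling/v2beta2 API version, the HPA name and the target Deployment name are assumptions, not taken from this article:

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: example-hpa                  # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example                    # hypothetical Deployment to scale
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource                   # a resource metric: CPU or memory
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50       # keep average CPU utilization across pods at 50%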

CPU

CPU is reported as the average usage, in CPU cores, over a period of time. More details: https://kubernetes.io/docs/tasks/debug-application-cluster/resource-metrics-pipeline/#cpu

Memory

Memory is reported as the working set, in bytes, at the instant the metric was collected. More details: https://kubernetes.io/docs/tasks/debug-application-cluster/resource-metrics-pipeline/#memory

Resource Metrics API

https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/resource-metrics-api.md
https://github.com/kubernetes/metrics/blob/master/pkg/apis/metrics/v1beta1/types.go
https://github.com/kubernetes/metrics

The Resource Metrics API exposes the amount of resources currently used by a given node or pod. The API does not store values over time. It is discoverable through the same endpoint as the other Kubernetes APIs, under /apis/metrics.k8s.io, so it is often referred to as the "metrics.k8s.io" API. The underlying per-node metrics are served by the kubelet at /metrics/resource/v1beta1, on the kubelet's authenticated and read-only ports. The Resource Metrics API requires the metrics server to be deployed.
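
As an illustration of what the metrics.k8s.io API returns, the following is a sketch of a PodMetrics object, based on the v1beta1 types linked above; the pod name, namespace, timestamp and usage values are made up:

apiVersion: metrics.k8s.io/v1beta1
kind: PodMetrics
metadata:
  name: example-pod                  # hypothetical pod
  namespace: default
timestamp: "2020-10-13T00:35:00Z"
window: 30s                          # period over which CPU usage was averaged
containers:
- name: example-container
  usage:
    cpu: 25m                         # average CPU usage, in cores, over the window
    memory: 64Mi                     # working set, in bytes, at collection time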

Custom Metrics

Aside from resource metrics, there are two other types of metrics, both of which are considered custom metrics: pod metrics and object metrics. Custom metrics are associated with Kubernetes objects (pods or otherwise) and are not limited to CPU and memory.

Pod Metrics

Pod metrics, designated by the 'Pods' type in a Horizontal Pod Autoscaler manifest, refer to any other metric, including a custom metric, related directly to the pod. For example, if a pod runs a message broker, the number of messages in the broker's queue could be a custom pod metric. The essential difference from an Object metric, described below, is that the metric is read for all pods involved and the values are averaged.
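
A sketch of the metrics section of a Horizontal Pod Autoscaler manifest using a Pods metric; the metric name queue_messages_ready is a hypothetical per-pod custom metric, matching the message queue example above:

metrics:
- type: Pods
  pods:
    metric:
      name: queue_messages_ready     # hypothetical custom metric exposed for each pod
    target:
      type: AverageValue
      averageValue: "100"            # desired average of the metric across all matched pods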

Object Metrics

Object metrics describe objects other than pods in the same namespace. For example, pods may be scaled according to a metric of another cluster object, such as the average request latency of an Ingress. The essential difference from a Pod metric is that the autoscaler does not need to fetch and average values from all pods; it only needs to read the metric from a single object. An Object metric is designated by the 'Object' type in the Horizontal Pod Autoscaler manifest.
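
A sketch of an Object metric in the same manifest format; the metric name requests-per-second and the Ingress name main-route are assumptions:

metrics:
- type: Object
  object:
    metric:
      name: requests-per-second      # hypothetical metric exposed for the Ingress
    describedObject:
      apiVersion: networking.k8s.io/v1beta1
      kind: Ingress
      name: main-route               # hypothetical Ingress in the same namespace
    target:
      type: Value
      value: "2k"                    # read from the single object, not averaged over pods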

Custom Metrics API

https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/custom-metrics-api.md

The API is discoverable through the same endpoint as the other Kubernetes APIs under /apis/custom.metrics.k8s.io, so it is often referred to as the "custom.metrics.k8s.io" API.


Monitoring systems like Prometheus expose application-specific metrics to the Horizontal Pod Autoscaler controller via the Custom Metrics API.
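
The component that implements the custom.metrics.k8s.io API (for example, an adapter in front of Prometheus) registers itself with the Kubernetes API aggregation layer through an APIService object. A sketch, assuming a hypothetical prometheus-adapter Service in a monitoring namespace:

apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1beta1.custom.metrics.k8s.io
spec:
  group: custom.metrics.k8s.io
  version: v1beta1
  service:
    name: prometheus-adapter         # hypothetical adapter Service
    namespace: monitoring            # hypothetical namespace
  insecureSkipTLSVerify: true        # sketch only; a real deployment should use a CA bundle
  groupPriorityMinimum: 100
  versionPriority: 100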

External Metrics

External Metrics API

https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/external-metrics-api.md

The API is discoverable through the same endpoint as the other Kubernetes APIs under /apis/external.metrics.k8s.io, so it is often referred to as the "external.metrics.k8s.io" API.
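
External metrics typically originate outside the cluster, for example in a hosted message queue or an external monitoring system, and are designated by the 'External' type in a Horizontal Pod Autoscaler manifest. A sketch, with a hypothetical metric name and label selector:

metrics:
- type: External
  external:
    metric:
      name: queue_messages_ready     # hypothetical metric provided by an external system
      selector:
        matchLabels:
          queue: worker_tasks        # hypothetical label identifying the queue
    target:
      type: AverageValue
      averageValue: "30"             # desired value of the metric per pod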

Sources of Metrics

kubelet

The kubelet exposes metrics collected by cAdvisor:

kubelet Metrics Collection

Kubernetes Metrics Server

Kubernetes Metrics Server

Prometheus

Prometheus