Events OS Metrics

=CPU Metrics=

==CpuUserTime==


The percentage of total CPU time spent executing code in user mode. By default, this metric is calculated from the difference between the values read from [[/proc/stat#user|/proc/stat]] during two successive collections. It is also possible to get the value from the top command output, but this has to be configured explicitly.
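A minimal sketch of this default calculation is shown below. It is a hypothetical illustration, not the actual events-api implementation: it reads the aggregated "cpu" line of /proc/stat twice and derives the percentage from the counter deltas. The same delta-based calculation applies to the other CPU time metrics in this section, using the corresponding /proc/stat column instead of the first ("user") one.

<syntaxhighlight lang="java">
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

// Hypothetical illustration of the default CpuUserTime calculation: the
// percentage is derived from the difference between two successive readings
// of the aggregated "cpu" counters in /proc/stat, not from a single sample.
public class CpuUserTimeSample {

    // Returns the counters of the aggregated "cpu" line:
    // user nice system idle iowait irq softirq steal guest guest_nice
    static long[] readCpuCounters() throws IOException {
        String cpuLine = Files.readAllLines(Paths.get("/proc/stat")).get(0);
        String[] tokens = cpuLine.trim().split("\\s+");
        long[] counters = new long[tokens.length - 1];
        for (int i = 1; i < tokens.length; i++) {
            counters[i - 1] = Long.parseLong(tokens[i]);
        }
        return counters;
    }

    public static void main(String[] args) throws Exception {

        long[] first = readCpuCounters();
        Thread.sleep(1000); // the interval between the two collections (arbitrary here)
        long[] second = readCpuCounters();

        long totalDelta = 0;
        for (int i = 0; i < first.length; i++) {
            totalDelta += second[i] - first[i];
        }

        // "user" is the first counter; the other CPU metrics use the other columns
        long userDelta = second[0] - first[0];

        double cpuUserTime = totalDelta == 0 ? 0.0 : 100.0 * userDelta / totalDelta;
        System.out.printf("CpuUserTime: %.2f%%%n", cpuUserTime);
    }
}
</syntaxhighlight>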


Also see:

==CpuNiceTime==


The percentage of total CPU time spent executing code in user mode with low priority (nice). By default, this metric is calculated from the difference between the values read from [[/proc/stat#nice|/proc/stat]] during two successive collections. It is also possible to get the value from the top command output, but this has to be configured explicitly.


Also see:
* [https://github.com/NovaOrdis/events-api/blob/master/src/main/java/io/novaordis/events/api/metric/os/mdefs/CpuNiceTime.java GitHub CpuNiceTime]


==CpuSystemTime==


The percentage of total CPU time spent executing system calls on behalf of processes. The metric used to be named <span id='CpuKernelTime'></span>"CpuKernelTime", but that name has been deprecated. By default, this metric is calculated from the difference between the values read from [[/proc/stat#system|/proc/stat]] during two successive collections. It is also possible to get the value from the top command output, but this has to be configured explicitly.


Also see:

==CpuIdleTime==


The percentage of total CPU time spent in idle mode. By default, this metric is calculated from the difference between the values read from [[/proc/stat#idle|/proc/stat]] during two successive collections. It is also possible to get the value from the top command output, but this has to be configured explicitly.


Also see:

==CpuIoWaitTime==


The percentage of total CPU time spent waiting for I/O to complete. The CPU does not actually wait for I/O: it is either scheduled onto another task or enters the idle state. When a CPU goes into the idle state with outstanding task I/O, another task will be scheduled on it. On a multi-core system, a task waiting for I/O to complete is not running on any particular CPU, so the iowait of each CPU is difficult to calculate. By default, this metric is calculated from the difference between the values read from [[/proc/stat#iowait|/proc/stat]] during two successive collections. It is also possible to get the value from the top command output, but this has to be configured explicitly.


Also see:

==CpuHardwareInterruptTime==


The percentage of total CPU time spent servicing hardware interrupts. By default, this metric is calculated from the difference between the values read from [[/proc/stat#irq|/proc/stat]] during two successive collections. It is also possible to get the value from the top command output, but this has to be configured explicitly.


Also see:

==CpuSoftwareInterruptTime==


The percentage of total CPU time spent servicing software interrupts. By default, this metric is calculated from the difference between the values read from [[/proc/stat#softirq|/proc/stat]] during two successive collections. It is also possible to get the value from the top command output, but this has to be configured explicitly.


Also see:

==CpuStolenTime==


The percentage of total CPU time spent in other operating systems when running in a virtualized environment. More details: {{Internal|Linux_Virtualization_Concepts#Steal_Time|Steal Time}}. By default, this metric is calculated from the difference between the values read from [[/proc/stat#steal|/proc/stat]] during two successive collections. It is also possible to get the value from the top command output, but this has to be configured explicitly.


Also see:
* [https://github.com/NovaOrdis/events-api/blob/master/src/main/java/io/novaordis/events/api/metric/os/mdefs/CpuStolenTime.java GitHub CpuStolenTime]


==CpuGuestTime==
 
The percentage of total CPU time spent running a virtual CPU for guest operating systems under the control of the Linux kernel. By default, this metric is calculated from the difference between the values read from [[/proc/stat#guest|/proc/stat]] during two successive collections. It is also possible to get the value from the top command output, but this has to be configured explicitly.
 
Also see:
* [[/proc/stat#guest|/proc/stat cpu guest time reading]]
 
==CpuGuestNiceTime==


The percentage of total CPU time spent running a virtual CPU for niced guest operating systems under the control of the Linux kernel. By default, this metric is calculated from the difference between the values read from [[/proc/stat#guest_nice|/proc/stat]] during two successive collections. It is also possible to get the value from the top command output, but this has to be configured explicitly.


Also see:
* [[/proc/stat#guest_nice|/proc/stat cpu guest_nice time reading]]
 
=Memory Metrics=


==PhysicalMemoryFree==

The amount of physical RAM left unused by the system.


Also see:
* [[Proc-meminfo#MemFree|/proc/meminfo MemFree]]
* [https://github.com/NovaOrdis/events-api/blob/master/src/main/java/io/novaordis/events/api/metric/os/mdefs/PhysicalMemoryFree.java GitHub PhysicalMemoryFree]


==PhysicalMemoryTotal==

The total amount of usable RAM, which is the physical RAM installed on the system minus a number of reserved bits and the kernel binary code.


Also see:
* [[Proc-meminfo#MemTotal|/proc/meminfo MemTotal]]
* [https://github.com/NovaOrdis/events-api/blob/master/src/main/java/io/novaordis/events/api/metric/os/mdefs/PhysicalMemoryTotal.java GitHub PhysicalMemoryTotal]


==PhysicalMemoryUsed==
 
The amount of physical memory used by processes. It is calculated with the formula:


Used Physical Memory = MemTotal - MemFree - Buffers - Cached
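A minimal sketch of this formula is shown below. It is hypothetical code, not the events-api implementation: it parses [[Proc-meminfo|/proc/meminfo]] into a key/value map (values are reported in kB) and subtracts MemFree, Buffers and Cached from MemTotal.

<syntaxhighlight lang="java">
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the formula above: parse /proc/meminfo into a
// key/value map (values in kB) and compute
// MemTotal - MemFree - Buffers - Cached.
public class PhysicalMemoryUsedSample {

    static Map<String, Long> readMeminfo() throws IOException {
        Map<String, Long> values = new HashMap<>();
        for (String line : Files.readAllLines(Paths.get("/proc/meminfo"))) {
            // lines look like "MemTotal:       16334420 kB"
            String[] tokens = line.split("\\s+");
            values.put(tokens[0].replace(":", ""), Long.parseLong(tokens[1]));
        }
        return values;
    }

    public static void main(String[] args) throws Exception {
        Map<String, Long> m = readMeminfo();
        long usedKb = m.get("MemTotal") - m.get("MemFree") - m.get("Buffers") - m.get("Cached");
        System.out.println("PhysicalMemoryUsed: " + usedKb + " kB");
    }
}
</syntaxhighlight>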


Also see:
* [[Proc-meminfo#Physical_Memory_Used_by_Processes|/proc/meminfo physical memory used by processes]]
* [https://github.com/NovaOrdis/events-api/blob/master/src/main/java/io/novaordis/events/api/metric/os/mdefs/PhysicalMemoryUsed.java GitHub PhysicalMemoryUsed]
 
=Swap Metrics=


==SwapFree==

The total amount of free swap.


Also see:
* [[Proc-meminfo#SwapFree|/proc/meminfo SwapFree]]
* [https://github.com/NovaOrdis/events-api/blob/master/src/main/java/io/novaordis/events/api/metric/os/mdefs/SwapFree.java GitHub SwapFree]


==SwapTotal==

The total amount of swap available.


Also see:
* [[Proc-meminfo#SwapTotal|/proc/meminfo SwapTotal]]
* [https://github.com/NovaOrdis/events-api/blob/master/src/main/java/io/novaordis/events/api/metric/os/mdefs/SwapTotal.java GitHub SwapTotal]

==SwapUsed==

The amount of swap used by processes can be approximated with the formula:

Used Swap = SwapTotal - SwapFree

Also see:
* [[Proc-meminfo#The_Total_Amount_of_Used_Swap|/proc/meminfo total amount of used swap]]
* [https://github.com/NovaOrdis/events-api/blob/master/src/main/java/io/novaordis/events/api/metric/os/mdefs/SwapUsed.java GitHub SwapUsed]
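A minimal, hypothetical sketch of the approximation above (not the events-api implementation): both SwapTotal and SwapFree are taken from the same /proc/meminfo reading.

<syntaxhighlight lang="java">
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

// Hypothetical sketch: approximate SwapUsed as SwapTotal - SwapFree,
// both taken from the same /proc/meminfo snapshot (values are in kB).
public class SwapUsedSample {

    static long meminfoValue(String key) throws IOException {
        for (String line : Files.readAllLines(Paths.get("/proc/meminfo"))) {
            if (line.startsWith(key + ":")) {
                return Long.parseLong(line.replaceAll("\\D+", ""));
            }
        }
        throw new IllegalStateException(key + " not found in /proc/meminfo");
    }

    public static void main(String[] args) throws Exception {
        long usedSwapKb = meminfoValue("SwapTotal") - meminfoValue("SwapFree");
        System.out.println("SwapUsed: " + usedSwapKb + " kB");
    }
}
</syntaxhighlight>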


=Load Average Metrics=

==LoadAverageLastMinute==

Also see:
* [https://github.com/NovaOrdis/events-api/blob/master/src/main/java/io/novaordis/events/api/metric/os/mdefs/LoadAverageLastMinute.java GitHub LoadAverageLastMinute]


==LoadAverageLastFiveMinutes==

Also see:
* [https://github.com/NovaOrdis/events-api/blob/master/src/main/java/io/novaordis/events/api/metric/os/mdefs/LoadAverageLastFiveMinutes.java GitHub LoadAverageLastFiveMinutes]


==LoadAverageLastTenMinutes==
