Events OS Metrics

=Internal=

=Overview=

=CPU Metrics=
==CpuUserTime==

The percentage of total CPU time spent executing code in user mode. By default, this metric is calculated from the difference between values read from [[/proc/stat#user|/proc/stat]] over two successive collections. It is also possible to get the value from the top command output, but this has to be configured explicitly.
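
The sketch below is a minimal, hypothetical illustration of that default calculation, not the events-api implementation (the class and method names are made up for the example): it reads the aggregated "cpu" line of /proc/stat twice and derives the user-time percentage from the difference between the two readings. The same differencing applies to the other /proc/stat-based CPU metrics on this page.

<syntaxhighlight lang="java">
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

/**
 * Hypothetical illustration of the default CpuUserTime calculation: the
 * percentage is derived from the difference between two successive readings
 * of the aggregated "cpu" line in /proc/stat.
 */
public class CpuUserTimeSample {

    // Returns the counters of the aggregated "cpu" line: user, nice, system, idle, ...
    static long[] readCpuCounters() throws IOException {
        for (String line : Files.readAllLines(Paths.get("/proc/stat"))) {
            if (line.startsWith("cpu ")) {
                String[] tokens = line.trim().split("\\s+");
                long[] counters = new long[tokens.length - 1];
                for (int i = 1; i < tokens.length; i++) {
                    counters[i - 1] = Long.parseLong(tokens[i]);
                }
                return counters;
            }
        }
        throw new IllegalStateException("no 'cpu' line in /proc/stat");
    }

    public static void main(String[] args) throws Exception {

        long[] first = readCpuCounters();
        Thread.sleep(1000); // the interval between two successive collections
        long[] second = readCpuCounters();

        long userDelta = second[0] - first[0]; // the first counter is user time
        long totalDelta = 0;
        for (int i = 0; i < second.length; i++) {
            totalDelta += second[i] - first[i];
        }

        System.out.printf("CpuUserTime: %.2f%%%n", 100.0 * userDelta / totalDelta);
    }
}
</syntaxhighlight>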


Also see:
==CpuNiceTime==

The percentage of total CPU time spent executing code in user mode with low priority (nice). By default, this metric is calculated from the difference between values read from [[/proc/stat#nice|/proc/stat]] over two successive collections. It is also possible to get the value from the top command output, but this has to be configured explicitly.


Also see:

* [https://github.com/NovaOrdis/events-api/blob/master/src/main/java/io/novaordis/events/api/metric/os/mdefs/CpuNiceTime.java GitHub CpuNiceTime]


==CpuSystemTime==

The percentage of total CPU time spent executing system calls on behalf of processes. The metric used to be named <span id='CpuKernelTime'></span>"CpuKernelTime", but that name has been deprecated. By default, this metric is calculated from the difference between values read from [[/proc/stat#system|/proc/stat]] over two successive collections. It is also possible to get the value from the top command output, but this has to be configured explicitly.


Also see:
==CpuIdleTime==

The percentage of total CPU time spent in idle mode. By default, this metric is calculated from the difference between values read from [[/proc/stat#idle|/proc/stat]] over two successive collections. It is also possible to get the value from the top command output, but this has to be configured explicitly.


Also see:
==CpuIoWaitTime==

The percentage of total CPU time spent waiting for I/O to complete. The CPU does not actually wait for I/O: it is either scheduled onto another task or enters the idle state. When a CPU goes idle while a task has outstanding I/O, another task may be scheduled on that CPU. On a multi-core system, a task waiting for I/O to complete is not running on any particular CPU, so the iowait of each individual CPU is difficult to calculate. By default, this metric is calculated from the difference between values read from [[/proc/stat#iowait|/proc/stat]] over two successive collections. It is also possible to get the value from the top command output, but this has to be configured explicitly.


Also see:
==CpuHardwareInterruptTime==

The percentage of total CPU time spent servicing hardware interrupts. By default, this metric is calculated from the difference between values read from [[/proc/stat#irq|/proc/stat]] over two successive collections. It is also possible to get the value from the top command output, but this has to be configured explicitly.


Also see:
==CpuSoftwareInterruptTime==

The percentage of total CPU time spent servicing software interrupts. By default, this metric is calculated from the difference between values read from [[/proc/stat#softirq|/proc/stat]] over two successive collections. It is also possible to get the value from the top command output, but this has to be configured explicitly.


Also see:
==CpuStolenTime==

The percentage of total CPU time spent in other operating systems when running in a virtualized environment. More details: {{Internal|Linux_Virtualization_Concepts#Steal_Time|Steal Time}}. By default, this metric is calculated from the difference between values read from [[/proc/stat#steal|/proc/stat]] over two successive collections. It is also possible to get the value from the top command output, but this has to be configured explicitly.


Also see:
==CpuGuestTime==

The percentage of total CPU time spent running a virtual CPU for guest operating systems under the control of the Linux kernel. By default, this metric is calculated from the difference between values read from [[/proc/stat#guest|/proc/stat]] over two successive collections. It is also possible to get the value from the top command output, but this has to be configured explicitly.


Also see:
==CpuGuestNiceTime==

The percentage of total CPU time spent running a virtual CPU for niced guest operating systems under the control of the Linux kernel. By default, this metric is calculated from the difference between values read from [[/proc/stat#guest_nice|/proc/stat]] over two successive collections. It is also possible to get the value from the top command output, but this has to be configured explicitly.


Also see:

=Memory Metrics=

==PhysicalMemoryFree==

The amount of physical RAM left unused by the system.

Also see:

==PhysicalMemoryTotal==

Total amount of usable RAM, which is the amount of physical RAM installed on the system minus a number of reserved bits and the kernel binary code.

Also see:

==PhysicalMemoryUsed==

The amount of physical memory used by processes. It is calculated with the formula:

 Used Physical Memory = MemTotal - MemFree - Buffers - Cached
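
A minimal sketch of this calculation, assuming the MemTotal, MemFree, Buffers and Cached values are read from /proc/meminfo; the class name is hypothetical and this is an illustration rather than the events-api implementation:

<syntaxhighlight lang="java">
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.HashMap;
import java.util.Map;

/**
 * Hypothetical illustration: computes used physical memory as
 * MemTotal - MemFree - Buffers - Cached, from /proc/meminfo.
 */
public class PhysicalMemoryUsedSample {

    public static void main(String[] args) throws IOException {

        Map<String, Long> kb = new HashMap<>();

        // /proc/meminfo lines look like "MemTotal:       16384256 kB"
        for (String line : Files.readAllLines(Paths.get("/proc/meminfo"))) {
            String[] tokens = line.split("\\s+");
            kb.put(tokens[0].replace(":", ""), Long.parseLong(tokens[1]));
        }

        long used = kb.get("MemTotal") - kb.get("MemFree")
                - kb.get("Buffers") - kb.get("Cached");

        System.out.println("PhysicalMemoryUsed: " + used + " kB");
    }
}
</syntaxhighlight>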

Also see:

=Swap Metrics=

==SwapFree==

The total amount of free swap.

Also see:

==SwapTotal==

The total amount of swap available.

Also see:

==SwapUsed==

The amount of swap used by processes can be approximated with the formula:

 Used Swap = SwapTotal - SwapFree
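
The same /proc/meminfo fields provide SwapTotal and SwapFree; a minimal self-contained sketch of the subtraction (the class name is hypothetical, not the library's code):

<syntaxhighlight lang="java">
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.HashMap;
import java.util.Map;

/** Hypothetical illustration: SwapUsed = SwapTotal - SwapFree, from /proc/meminfo. */
public class SwapUsedSample {

    public static void main(String[] args) throws IOException {

        Map<String, Long> kb = new HashMap<>();

        for (String line : Files.readAllLines(Paths.get("/proc/meminfo"))) {
            String[] tokens = line.split("\\s+");
            kb.put(tokens[0].replace(":", ""), Long.parseLong(tokens[1]));
        }

        long usedSwap = kb.get("SwapTotal") - kb.get("SwapFree");

        System.out.println("SwapUsed: " + usedSwap + " kB");
    }
}
</syntaxhighlight>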

Also see:

=Load Average Metrics=

==LoadAverageLastMinute==

Also see:

==LoadAverageLastFiveMinutes==

Also see:

==LoadAverageLastTenMinutes==

Also see:
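
For reference, the obvious source for these metrics on Linux is /proc/loadavg, whose first three fields are the 1-, 5- and 15-minute load averages. A minimal sketch that reads the three fields (the class name is hypothetical, and this is an illustration rather than the events-api implementation):

<syntaxhighlight lang="java">
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

/**
 * Hypothetical illustration: reads the load average fields from /proc/loadavg.
 * Note that the kernel exposes 1-, 5- and 15-minute averages.
 */
public class LoadAverageSample {

    public static void main(String[] args) throws IOException {

        // /proc/loadavg looks like "0.42 0.35 0.30 1/234 5678"
        String[] fields =
                Files.readAllLines(Paths.get("/proc/loadavg")).get(0).split("\\s+");

        System.out.println("LoadAverageLastMinute:      " + fields[0]);
        System.out.println("LoadAverageLastFiveMinutes: " + fields[1]);
        System.out.println("Load average (15 minutes):  " + fields[2]);
    }
}
</syntaxhighlight>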