- 1 External
- 2 Internal
- 3 TODO
- 4 Overview
- 5 cgroups Subsystems
- 5.1 blkio
- 5.2 cpu
- 5.3 cpuacct
- 5.4 cpuset
- 5.5 devices
- 5.6 freezer
- 5.7 memory
- 5.8 net_cls
- 5.9 net_prio
- 5.10 ns
- 5.11 perf_event
- 6 Operations
The relationship between cgroup settings and their parents' settings: how does a hierarchy translate into effective values?
cgroups is a Linux kernel feature that allows allocating resources (CPU time, system memory, network bandwidth, or combinations of these) among user-defined groups of processes running on the system. cgroups stands for "control groups". cgroups not only track groups of processes, but also expose metrics about CPU, memory and block I/O usage.
cgroups are exposed through a pseudo-filesystem available at /sys/fs/cgroup (older systems expose it at /cgroup). The sub-directories of the cgroup pseudo-filesystem root correspond to the different cgroup hierarchies: cpu, freezer, blkio, etc.
This command returns a list of the cgroups that are mounted:
cat /proc/mounts | grep cgroup
The control groups subsystems known to the system are available in /proc/cgroups:
#subsys_name    hierarchy  num_cgroups  enabled
cpuset          6          13           1
cpu             4          89           1
cpuacct         4          89           1
memory          8          89           1
devices         3          83           1
freezer         10         13           1
net_cls         5          13           1
blkio           11         89           1
perf_event      2          13           1
hugetlb         9          13           1
pids            7          13           1
net_prio        5          13           1
cgroups are organized hierarchically, with child cgroups inheriting certain attributes from their parent group. Many different hierarchies of cgroups can exist simultaneously on a system. Each hierarchy is attached to one or more subsystems, where a subsystem represents a single resource such as CPU time or memory.
To figure out what cgroups a process belongs to, look at /proc/<pid>/cgroup: the cgroup is shown as a path relative to the root of the hierarchy mount point. "/" means the process has not been assigned to a group, while "/lxc/something" means the process is member of a container named "something".
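As a quick check, the hierarchy paths for the current process can be extracted from /proc/self/cgroup. A sketch (the exact lines depend on which hierarchies are mounted on the system):

```shell
# Each line of /proc/self/cgroup has the form hierarchy-ID:subsystems:path;
# the last colon-separated field is the cgroup path within that hierarchy.
awk -F: '{print $NF}' /proc/self/cgroup
```

On an unconstrained process every printed path is "/"; inside a container the paths name the container's cgroup.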
cgroups can be configured via the cgconfig service.
These subsystems are also known as "controllers":
blkio sets limits on input/output access to and from block devices.
cpu uses the scheduler to provide cgroup tasks access to the CPU. Usually, access to the CPU is scheduled by the CFS scheduler, and the control parameters make that obvious by including "cfs" in their names. The RT scheduler is also available.
cgroups can be used to control two things:
- the relative share of CPU time to be allocated to the tasks in the cgroup.
- ceiling enforcement: a hard limit on the amount of CPU a cgroup can utilize, to prevent the "noisy neighbor" problem, since arbitrary processes will use all available CPU if they are allowed to.
The relative share of CPU to be allocated to the tasks in a cgroup can be controlled with an integer value specified in the "cpu.shares" file of the cgroup. The integer value specifies the share of CPU time available to the tasks in the cgroup, relative to all other tasks being scheduled at the same time. For example, tasks in two cgroups that have cpu.shares set to 100 will receive equal CPU time, but tasks in a cgroup that has cpu.shares set to 200 receive twice the CPU time of tasks in a cgroup where cpu.shares is set to 100. However, the exact value of that time depends on who else is consuming CPU at that time. The value specified in the cpu.shares file must be 2 or higher.
In case of a multi-core system, shares specified in "cpu.shares" are distributed across all CPU cores of the system.
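To make the relative nature of cpu.shares concrete, here is a sketch of the arithmetic for two hypothetical cgroups, A (cpu.shares=200) and B (cpu.shares=100), competing for the CPU at the same time:

```shell
# Hypothetical cgroups A and B, both fully loaded and under contention.
shares_a=200
shares_b=100
total=$((shares_a + shares_b))
# Each group's share of CPU time is its weight divided by the total weight
# of all groups runnable at the same time:
echo "A gets $((100 * shares_a / total))% of CPU time"   # 66%
echo "B gets $((100 * shares_b / total))% of CPU time"   # 33%
```

Note that shares only matter under contention: if B goes idle, A is free to consume 100% of the CPU.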
As a consequence of how CFS works, it is difficult to predict how much CPU an arbitrary task will actually get, because that depends on the actual process population on the node and on what those processes actually do.
Controlling CPU Throttling
cgroups allow putting a limit on the amount of CPU cycles allocated to a cgroup, to help prevent the "noisy neighbor" problem: processes that consume as much CPU as they can get, reducing everyone else's share.
cpu.cfs_period_us specifies a period of time in microseconds (µs, represented here as "us") for how regularly a cgroup's access to CPU resources should be reallocated. The upper limit of the cpu.cfs_period_us parameter is 1 second and the lower limit is 1,000 microseconds.
cpu.cfs_quota_us specifies the total amount of time in microseconds (µs, represented here as "us") for which all tasks in a cgroup can run during one period (as defined by cpu.cfs_period_us). As soon as the tasks in a cgroup use up all the time specified by the quota, they are throttled for the remainder of the period and not allowed to run until the next period. For example, if the tasks in a cgroup should be able to access a single CPU for 0.2 seconds out of every 1 second, set cpu.cfs_quota_us to 200,000 and cpu.cfs_period_us to 1,000,000. Note that the quota and period parameters operate on a per-CPU basis: to allow a process to fully utilize two CPUs, set cpu.cfs_quota_us to 200,000 and cpu.cfs_period_us to 100,000.
Setting the value in cpu.cfs_quota_us to -1 indicates that the cgroup does not adhere to any CPU time restrictions. This is also the default value for every cgroup (except the root cgroup).
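The quota-to-period ratio gives the number of CPUs' worth of time a cgroup may consume per period. A sketch of the two-CPU example (the cgroup name "mygroup" and its path are hypothetical; the writes are shown commented out):

```shell
quota_us=200000    # cpu.cfs_quota_us
period_us=100000   # cpu.cfs_period_us
# Effective ceiling: quota / period CPUs' worth of time per period.
echo "limit: $((quota_us / period_us)) CPUs"
# Applying the values (v1 cpu hierarchy; "mygroup" is a hypothetical cgroup):
# echo 200000 > /sys/fs/cgroup/cpu/mygroup/cpu.cfs_quota_us
# echo 100000 > /sys/fs/cgroup/cpu/mygroup/cpu.cfs_period_us
```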
The cpu.stat file reports the following values:
- nr_periods: the number of period intervals (as specified in cpu.cfs_period_us) that have elapsed.
- nr_throttled: number of times tasks in a cgroup have been throttled (that is, not allowed to run because they have exhausted all of the available time as specified by their quota).
- throttled_time: the total time duration (in nanoseconds) for which tasks in a cgroup have been throttled.
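These counters make it straightforward to estimate how often a cgroup hits its quota. A sketch using made-up sample values:

```shell
# Sample values as they might appear in cpu.stat (hypothetical numbers):
nr_periods=1000
nr_throttled=250
throttled_time=1500000000   # nanoseconds
# Percentage of periods in which the group exhausted its quota:
echo "throttled in $((100 * nr_throttled / nr_periods))% of periods"
# Average time spent throttled per throttled period, in milliseconds:
echo "average throttle: $((throttled_time / nr_throttled / 1000000)) ms"
```

A high nr_throttled-to-nr_periods ratio is a sign the quota is too tight for the workload.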
cpuacct generates automatic reports on the CPU resources used by the tasks in a cgroup.
"user" time is the amount of time a process has direct control of the CPU, executing process code. Also see /proc/stat cpu.
"system" time is the time the kernel is executing system calls on behalf of the process. Also see /proc/stat cpu.
cpuacct.usage contains the total number of nanoseconds (10⁻⁹ seconds) of the host's CPU capacity that has been used since boot.
cpuacct.usage_percpu contains, for each CPU, the total number of nanoseconds (10⁻⁹ seconds) that CPU has been in use since boot. Per-CPU usage can help you identify core imbalances, which can be caused by bad configuration.
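Since both files report raw nanosecond counters, converting to seconds is a simple division. A sketch with a hypothetical sample value:

```shell
# A hypothetical reading from cpuacct.usage, in nanoseconds:
usage_ns=123456789000
# 1 second = 10^9 nanoseconds:
echo "CPU used: $((usage_ns / 1000000000)) s"   # 123 s
```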
cpuset assigns individual CPUs and memory nodes to the tasks in a cgroup.
Memory metrics are found in the "memory" cgroup. On systems where the memory control group is disabled by default, it can be enabled with the cgroup_enable=memory kernel command-line parameter (and swapaccount=1 to also account for swap).
The metrics are available in the cgroup's memory.stat file.
The value for the memory limit is available in memory.limit_in_bytes.
net_cls tags network packets with a class identifier (classid) that allows the Linux traffic controller (tc) to identify packets originating from a particular cgroup.
ns is the namespace subsystem.
The recommended location for cgroup hierarchies is /sys/fs/cgroup.