Kernel Runtime Configuration
External
- sysctl tweaks https://wiki.mikejung.biz/Sysctl_tweaks
Internal
Overview
The kernel can be configured at three different stages in its lifecycle:
- At compilation. This is when defaults are set.
- At boot time, when it is invoked by the boot loader, via kernel parameters (see https://wiki.archlinux.org/index.php/kernel_parameters)
- At runtime, from files in /proc and /sys.
The values of the configuration properties described below can be obtained with:
sysctl <property-name>
Alternatively, they can be read from /proc/sys/<property-name-based-path>.
For example, net.core.wmem_max can be read from:
cat /proc/sys/net/core/wmem_max
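A value can also be changed at runtime, either with sysctl -w or by writing to the corresponding /proc/sys file. A minimal illustration, using net.core.wmem_max with an arbitrary value:
sysctl -w net.core.wmem_max=16777216
echo 16777216 > /proc/sys/net/core/wmem_max
Changes made this way are lost at reboot; to make them persistent, use /etc/sysctl.conf, described below.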
/etc/sysctl.conf
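Properties set in this file are applied at boot, which makes runtime changes persistent. A minimal sketch of the format (the property and value are illustrative only):
net.core.wmem_max=16777216
The file can be re-read without rebooting with:
sysctl -p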
Configuration Properties
fs
kernel
kernel.hostname
kernel.pid_max
The maximum value a process identifier (PID) can take before wrapping; it effectively bounds the number of processes and threads that can exist simultaneously on the system.
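For illustration (the value below is arbitrary), the current limit can be read and raised at runtime with:
sysctl kernel.pid_max
sysctl -w kernel.pid_max=4194304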
net
core
net.core.rmem_default
The default size, in bytes, of the socket receive buffer, for all protocols. It may be overridden by more specific per-protocol values, such as net.ipv4.tcp_rmem. An application may change the size of its receive buffer with the SO_RCVBUF socket option, up to net.core.rmem_max bytes. This value can be changed without a reboot.
net.core.rmem_max
The maximum socket receive buffer size. An application can grow its receive buffer with SO_RCVBUF up to this limit. This value can be changed without a reboot.
More details: https://wiki.mikejung.biz/Sysctl_tweaks#net.core.rmem_max
net.core.wmem_default
The default size, in bytes, of the socket send buffer. It may be overridden by more specific per-protocol values, such as net.ipv4.tcp_wmem. An application may change the size of its send buffer with the SO_SNDBUF socket option, up to net.core.wmem_max bytes. This value can be changed without a reboot.
net.core.wmem_max
The maximum socket send buffer size. An application can grow its send buffer with the SO_SNDBUF socket option up to this limit. This value can be changed without a reboot.
More details: https://wiki.mikejung.biz/Sysctl_tweaks#net.core.wmem_max
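As an illustration, the four core buffer defaults and maxima discussed above can be inspected together with:
sysctl net.core.rmem_default net.core.rmem_max net.core.wmem_default net.core.wmem_max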
net.core.netdev_max_backlog
'net.core.netdev_max_backlog' determines the maximum number of packets queued on the INPUT side when the interface receives packets faster than the kernel can process them.
More details:
- How to tune 'net.core.netdev_max_backlog' and 'net.core.netdev_budget' sysctl kernel tunables? https://access.redhat.com/solutions/1241943
- https://wiki.mikejung.biz/Sysctl_tweaks#net.core.netdev_max_backlog
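For illustration (the value is arbitrary, not a recommendation), the current backlog limit can be read and raised at runtime:
sysctl net.core.netdev_max_backlog
sysctl -w net.core.netdev_max_backlog=5000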
ipv4
net.ipv4.tcp_rmem
'net.ipv4.tcp_rmem' is a vector of 3 integers: [min, default, max]. These parameters are used by TCP to adjust receive buffer sizes. TCP dynamically adjusts the size of the receive buffer from the defaults specified by the attribute, in the range of these values, depending on memory available in the system.
"min" is the minimum size of the receive buffer used by each TCP socket. The default value is the system page size. This value is used to ensure that in memory pressure mode, allocations below this size will still succeed. This is not used to limit the size of the receive buffer declared using SO_RCVBUF on a socket.
"default" is the default size of the receive buffer for a TCP socket. This value overwrites the initial default buffer size from the generic net.core.rmem_default defined for all protocols. If larger receive buffer sizes are desired, this value should be increased, and it will affect all sockets. To enable large TCP windows, net.ipv4.tcp_window_scaling must be enabled.
"max" is the maximum size of the receive buffer used by each TCP socket. The default value is calculated using the formula:
max(87380, min(4MB, tcp_mem[1]*PAGE_SIZE/128))
This value does NOT override the global net.core.rmem_max. This is not used to limit the size of the receive buffer declared using SO_RCVBUF on a socket.
Example:
net.ipv4.tcp_rmem=4096 12582912 16777216
net.ipv4.tcp_wmem
'net.ipv4.tcp_wmem' is a vector of 3 integers: [min, default, max]. These parameters are used by TCP to adjust send buffer sizes. TCP dynamically adjusts the size of the send buffer from the defaults specified by this attribute, in the range of these values, depending on memory available.
"min" represents the minimum size of the send buffer used by each TCP socket. The default value is the system page size. This value is used to ensure that in memory pressure mode, allocations below this size will still succeed. This is not used to bound the size of the send buffer declared using SO_SNDBUF on a socket.
"default" represents the default size of the send buffer for a TCP socket. This value overwrites the initial default buffer size from the generic global net.core.wmem_default defined for all protocols. If larger send buffer sizes are desired, this value should be increased, and it will affect all sockets. To allow for large TCP windows, net.ipv4.tcp_window_scaling must be set to a non-zero value.
"max" represents the maximum size of the send buffer used by each TCP socket. This value does not override net.core.wmem_max. This is not used to limit the size of the send buffer declared using SO_SNDBUF on a socket. The default value is calculated using the formula:
max(65536, min(4MB, tcp_mem[1]*PAGE_SIZE/128))
Example:
net.ipv4.tcp_wmem=4096 12582912 16777216
net.ipv4.tcp_window_scaling
1 enables TCP window scaling, which allows the kernel to advertise and use TCP windows larger than 64 KB, dynamically growing the window as needed.
net.ipv4.tcp_timestamps
The TCP timestamp feature allows round-trip time measurement by adding a timestamp option (10 bytes, usually padded to 12) to the TCP header. TCP timestamps may be turned off to reduce performance spikes related to timestamp generation. 1 indicates that timestamps are on, 0 indicates they are off.
net.ipv4.tcp_timestamps=0
net.ipv4.tcp_sack
Disables/enables selective acknowledgements (SACK). 1 means enabled (the default); setting it to 0 disables SACK.
net.ipv4.tcp_no_metrics_save
Controls whether TCP caches connection metrics (such as slow-start threshold and RTT estimates) when a connection closes, so they can be reused as initial conditions for connections to the same host in the near future. Setting this to 1 disables the caching.
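As a purely illustrative sketch (the values are arbitrary and not recommendations), the TCP-related properties described above are often grouped together in /etc/sysctl.conf so they survive a reboot:
net.core.rmem_max=16777216
net.core.wmem_max=16777216
net.ipv4.tcp_rmem=4096 12582912 16777216
net.ipv4.tcp_wmem=4096 12582912 16777216
net.ipv4.tcp_window_scaling=1
net.ipv4.tcp_timestamps=0
net.ipv4.tcp_sack=1
net.ipv4.tcp_no_metrics_save=1
They can be applied without a reboot with sysctl -p.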