Chapter 22. Tuning

Table of Contents

22.1. Linux Kernel Parameters
22.1.1. IP Ports
22.1.2. TCP Buffer Sizes
22.1.3. Queue Sizes
22.1.4. Congestion Control
22.1.5. Setting Linux kernel parameters with sysctl
22.2. Linux User Limits
22.2.1. SysVinit User Limits
22.2.2. Systemd User Limits
22.3. JVM Tuning
22.3.1. JVM Memory Allocation
22.3.2. JVM Garbage Collection
22.3.3. JVM Temporary Files
22.4. Server Thread Pooling
22.5. Database Connection Pooling

22.1. Linux Kernel Parameters

GNU/Linux distributions are generally not configured out-of-the-box to run demanding server processes. So, running SavaPage under high load on a vanilla GNU/Linux OS can easily result in degraded performance.

Performance bottlenecks are usually due to OS, TCP stack and network settings meant for desktop user sessions, not for server processes used intensively by many network clients. Fortunately, it is easy to unleash the full potential of your GNU/Linux host with a few simple tweaks: SavaPage scales well once the right kernel settings are applied.

Relevant kernel parameters and settings are discussed in the next sections. The last section summarizes the suggested settings and describes how to apply them. See Section 22.1.5, “Setting Linux kernel parameters with sysctl”.

Note

Kernel parameters with ipv4 in their names also apply to TCP over IPv6.

22.1.1. IP Ports

As SavaPage concurrently establishes many outgoing connections, we must make sure Linux does not run low on ephemeral local ports[42] and that it reuses sockets in the TIME_WAIT state.

net.ipv4.ip_local_port_range = 1024 65535   1 
net.ipv4.tcp_tw_reuse = 1                   2

1

Broaden the ephemeral local port range.

2

Enable the reuse of sockets with state TIME_WAIT. This is particularly useful in environments where numerous short connections are open and left in TIME_WAIT state, such as in SavaPage.
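
You can check at runtime how your host is doing on both counts. A minimal sketch, assuming the ss utility from the iproute2 package is installed:

sysctl net.ipv4.ip_local_port_range        # show the active ephemeral port range
ss -H -tan state time-wait | wc -l         # count sockets currently in TIME_WAIT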

22.1.2. TCP Buffer Sizes

Linux does a good job of auto-tuning the TCP buffers, but the default maximum sizes are still very small. Here are sample settings for 1Gb and 10Gb networks.

# Settings for 1Gb network (16MB buffer)
net.core.rmem_max = 16777216                1
net.core.wmem_max = 16777216                2
net.ipv4.tcp_rmem = 4096 87380 16777216     3
net.ipv4.tcp_wmem = 4096 16384 16777216     4                           

# Settings for 10Gb network (32MB buffer)
net.core.rmem_max = 33554432
net.core.wmem_max = 33554432
net.ipv4.tcp_rmem = 4096 87380 33554432
net.ipv4.tcp_wmem = 4096 16384 33554432

# Settings for 10Gb network (54MB buffer)
net.core.rmem_max = 56623104
net.core.wmem_max = 56623104
net.ipv4.tcp_rmem = 4096 87380 56623104
net.ipv4.tcp_wmem = 4096 16384 56623104

1

Max size (bytes) of the TCP receive buffer as settable with setsockopt.

2

Max size (bytes) of the TCP send buffer as settable with setsockopt.

3

Auto-tuning limits (bytes) for TCP receive buffer: min, default, and max number of bytes.

4

Auto-tuning limits (bytes) for TCP send buffer: min, default, and max number of bytes.
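
As a rule of thumb, the maximum buffer size should cover at least the Bandwidth-Delay Product (BDP) of your network path: bandwidth multiplied by round-trip time. A worked example, assuming a 1Gb link and a hypothetical round-trip time of 100 ms:

# BDP = bandwidth (bytes/s) x round-trip time (s)
# 1 Gbit/s = 125000000 bytes/s, RTT = 0.1 s
echo $(( 1000000000 / 8 * 100 / 1000 ))    # 12500000 bytes, roughly 12MB

The 16MB maximum in the 1Gb sample above thus leaves ample headroom, even for high-latency paths.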

22.1.3. Queue Sizes

While a socket is listening and busy, new connection requests pile up. The kernel keeps pending connection requests in a buffer; when the buffer is full, new requests fail. You can increase several buffer sizes.

net.core.somaxconn = 4096               1
net.core.netdev_max_backlog = 16384     2
net.ipv4.tcp_max_syn_backlog = 8192     3
net.ipv4.tcp_syncookies = 1             4

1

Max number of queued connections on a socket. The default of 128 is too low: we raise this value substantially to support bursts of requests.

2

Max number of packets queued on the input side when the interface receives packets faster than the kernel can process them.

3

Max number of half-open SYN requests to keep in memory.

4

Enable SYN cookies to harden the TCP/IP stack against SYN floods.
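
To verify whether these queues actually overflow on your host, inspect the kernel's cumulative counters. A sketch; the exact counter wording differs per kernel and distribution:

sysctl net.core.somaxconn            # current maximum accept queue length
netstat -s | grep -i listen          # e.g. "... times the listen queue of a socket overflowed"

Counters that are non-zero and growing indicate that raising the queue sizes above is worthwhile.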

22.1.4. Congestion Control

Congestion refers to a network state where a node or link carries so much data that network service quality deteriorates, resulting in queuing delay, frame or data packet loss and the blocking of new connections.

In a congested network, response time slows and throughput drops. Congestion occurs when network data traffic exceeds the available bandwidth.

Linux supports pluggable congestion control (avoidance) algorithms. To list the congestion control algorithms available in your kernel, run the command:

sudo sysctl net.ipv4.tcp_available_congestion_control

If cubic and/or htcp are not listed, you will need to research the congestion control algorithms available for your kernel. If cubic is available, set it as the control algorithm:

net.ipv4.tcp_congestion_control = cubic
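
For example, assuming the cubic module ships with your kernel (on most modern kernels it is built in and already the default), you can activate it on-the-fly:

sudo modprobe tcp_cubic                                   # no-op if cubic is built into the kernel
sudo sysctl -w net.ipv4.tcp_congestion_control=cubic      # activate without reboot
sysctl net.ipv4.tcp_congestion_control                    # verify the active algorithm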

22.1.5. Setting Linux kernel parameters with sysctl

Edit the file /etc/sysctl.conf like this:

sudo vi /etc/sysctl.conf

and add the following lines, which summarize the previously discussed kernel parameters, at the end of the file:

#------------------------------------------------------
# SavaPage Settings for 1Gb network
#------------------------------------------------------
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 16384 16777216
net.core.somaxconn = 4096
net.core.netdev_max_backlog = 16384
net.ipv4.tcp_max_syn_backlog = 8192
net.ipv4.tcp_syncookies = 1
net.ipv4.ip_local_port_range = 1024 65535
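# tcp_tw_recycle was removed in Linux 4.12; omit this line on newer kernels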
net.ipv4.tcp_tw_recycle = 0
net.ipv4.tcp_tw_reuse = 1

# Only if cubic is available
net.ipv4.tcp_congestion_control = cubic

You can apply the settings without rebooting the server with this command:

sudo sysctl -p
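
Alternatively, on distributions that read the /etc/sysctl.d directory, you can keep the SavaPage settings in a separate drop-in file, so they survive package upgrades that touch /etc/sysctl.conf. A sketch, where the file name 99-savapage.conf is just an example:

sudo vi /etc/sysctl.d/99-savapage.conf     # add the same lines as above
sudo sysctl --system                       # load settings from all configuration directories
sysctl net.ipv4.tcp_rmem                   # spot-check that a setting took effect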



[42] An established TCP/IP connection can be regarded as a 4-tuple (server IP, server port, client IP, client port). Three of the four are evident: the client uses its own IP address to connect to the server's IP address and service port. However, the connection also needs a port number at the client side. Unless the client program explicitly requests one, this port number is called an ephemeral port. Ephemeral ports are temporarily issued by the IP stack of the client OS from a dedicated port range.