How to Optimize Network Performance with `sysctl` Tuning on FreeBSD
FreeBSD is renowned for its robust networking capabilities and exceptional performance, making it an excellent choice for servers, routers, and network appliances. One of the most powerful tools for optimizing network performance on FreeBSD is the sysctl interface, which provides direct access to kernel parameters that control various aspects of system behavior, including network stack performance.
This article offers a comprehensive guide to optimizing network performance on FreeBSD systems through strategic sysctl tuning. We’ll explore the most important networking parameters, explain their functions, provide recommended values for different scenarios, and discuss methodologies for testing and validating your configurations.
Understanding sysctl on FreeBSD
The sysctl utility allows administrators to view and modify kernel parameters at runtime. On FreeBSD, the networking stack is highly configurable through various sysctl parameters, organized in hierarchical namespaces such as net.inet, net.inet6, and kern.
To view the current value of a parameter:
sysctl net.inet.tcp.sendspace
To set a new value temporarily (until next reboot):
sysctl net.inet.tcp.sendspace=131072
For permanent changes, add the parameter to /etc/sysctl.conf:
net.inet.tcp.sendspace=131072
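Entries in /etc/sysctl.conf are normally applied at the next boot. To apply them immediately, the file can be re-read by hand; the exact options depend on your release, but something along these lines should work:
# Re-apply the settings from /etc/sysctl.conf without rebooting
service sysctl restart
# Or, on releases whose sysctl(8) supports -f, load the file directly
sysctl -f /etc/sysctl.conf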
Baseline Assessment
Before making any changes, establish a performance baseline to measure improvements:
# Check current network throughput
netstat -i
systat -ifstat
# Examine TCP connection statistics
netstat -s -p tcp
# Check for packet drops and errors
netstat -s | grep -i drop
netstat -s | grep -i error
Tools like iperf3 and netperf can provide more detailed performance metrics:
# Install iperf3
pkg install iperf3
# Run a benchmark (server side)
iperf3 -s
# Run a benchmark (client side)
iperf3 -c server_ip -P 4 -t 30
Essential Network sysctl Parameters
TCP Buffer Sizes
TCP buffer sizes significantly impact throughput, especially on high-bandwidth or high-latency connections:
# Increase TCP send and receive buffers
net.inet.tcp.sendspace=131072 # Default: 32768
net.inet.tcp.recvspace=131072 # Default: 65536
For high-bandwidth, high-latency networks (high bandwidth-delay product), consider larger values:
net.inet.tcp.sendspace=262144
net.inet.tcp.recvspace=262144
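To judge how large is large enough, estimate the bandwidth-delay product: the link rate in bytes per second multiplied by the round-trip time. A rough calculation for a hypothetical 1 Gbit/s path with a 30 ms RTT:
# 1 Gbit/s = 125,000,000 bytes/s; multiply by a 0.030 s RTT
echo $((1000000000 / 8 * 30 / 1000))   # ~3,750,000 bytes in flight
A single connection on such a path can keep several megabytes in flight, which is why the buffer maximums discussed under Network Memory Management below matter as much as the per-socket defaults.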
TCP Connection Parameters
Optimize how TCP establishes and maintains connections:
# Increase the TCP connection backlog queue
kern.ipc.somaxconn=4096 # Default: 128
net.inet.tcp.syncache.bucketlimit=1024 # Default: 30
net.inet.tcp.syncache.cachelimit=30720 # Default: 15360
# Expedite connection establishment
net.inet.tcp.fastopen.server_enable=1 # Default: 0
net.inet.tcp.fastopen.client_enable=1 # Default: 0
TCP Congestion Control
FreeBSD offers several congestion control algorithms. The long-time default, newreno, works well in most environments (recent releases ship cubic as the default), but alternatives may perform better in specific scenarios:
# View available congestion control algorithms
sysctl net.inet.tcp.cc.available
# Set the default congestion control algorithm
net.inet.tcp.cc.algorithm=cubic # Options: newreno, cubic, htcp, etc.
For high-speed networks, cubic or htcp often outperforms newreno:
net.inet.tcp.cc.algorithm=cubic
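On releases where the chosen algorithm is not compiled into the kernel, its congestion control module has to be loaded before it appears in net.inet.tcp.cc.available; for example, for CUBIC:
# Load the CUBIC congestion control module now
kldload cc_cubic
# Load it automatically at boot
echo 'cc_cubic_load="YES"' >> /boot/loader.conf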
TCP Timestamps and Window Scaling
Ensure TCP timestamps and window scaling are enabled for optimal performance:
net.inet.tcp.rfc1323=1 # Enable TCP timestamps and window scaling
Network Memory Management
Tune network memory allocation for better performance:
# Increase network memory limits
kern.ipc.maxsockbuf=16777216 # Maximum socket buffer size (16MB)
net.inet.tcp.recvbuf_max=16777216 # Maximum TCP receive buffer
net.inet.tcp.sendbuf_max=16777216 # Maximum TCP send buffer
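These maximums work together with FreeBSD's TCP buffer autotuning, which grows per-socket buffers on demand up to the limits above. A brief sketch of the related knobs (defaults may vary slightly between releases):
# TCP buffer autotuning (normally enabled by default)
net.inet.tcp.sendbuf_auto=1
net.inet.tcp.recvbuf_auto=1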
Interface Queue Length
Adjust interface queue lengths for high-throughput scenarios:
# Increase interface queue length (a boot-time tunable; set it in /boot/loader.conf)
net.link.ifqmaxlen=2048 # Default: 50
FIN_WAIT_2 State
Reduce the time sockets spend in the FIN_WAIT_2 state:
net.inet.tcp.finwait2_timeout=30000 # 30 seconds (default: 60000)
Network Stack Limits
Increase global network stack limits:
kern.ipc.maxsockets=204800 # Maximum number of sockets
kern.ipc.maxsockbuf=16777216 # Maximum socket buffer size
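To see how close the system already runs to these ceilings, compare the live socket count against the configured maximum (kern.ipc.numopensockets is assumed to be available here, as on recent releases):
# Compare open sockets against the configured ceiling
sysctl kern.ipc.numopensockets kern.ipc.maxsockets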
Optimization for Specific Scenarios
High-Throughput File Servers
For servers handling large file transfers:
# Increase TCP buffer sizes
net.inet.tcp.sendspace=262144
net.inet.tcp.recvspace=262144
# Optimize TCP for large transfers
net.inet.tcp.delayed_ack=0 # Disable delayed ACKs
net.inet.tcp.mssdflt=1448 # Optimize MSS for typical MTU
# Increase socket buffer limits
kern.ipc.maxsockbuf=16777216
Web Servers with Many Concurrent Connections
For web servers handling thousands of concurrent connections:
# Increase connection queue limits
kern.ipc.somaxconn=8192
net.inet.tcp.syncache.bucketlimit=2048
net.inet.tcp.syncache.cachelimit=61440
# Recycle sockets out of FIN_WAIT_2 quickly
net.inet.tcp.fast_finwait2_recycle=1
net.inet.tcp.finwait2_timeout=5000 # 5 seconds
# Increase file descriptor limits
kern.maxfiles=200000
kern.maxfilesperproc=150000
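To confirm that the larger backlog is both needed and sufficient, watch the listen queues and the overflow counter while the server is under load; roughly:
# Show per-socket listen queue depths
netstat -Lan
# Count connections dropped because a listen queue was full
netstat -s -p tcp | grep -i "listen queue"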
Low-Latency Applications
For applications requiring minimal latency:
# Disable delayed ACKs
net.inet.tcp.delayed_ack=0
# Reduce send/receive buffer sizes
net.inet.tcp.sendspace=16384
net.inet.tcp.recvspace=16384
# Optimize for low latency
net.inet.tcp.minmss=536
net.inet.ip.intr_queue_maxlen=2048
Routers and Firewalls
For systems acting as routers or firewalls:
# Enable packet forwarding
net.inet.ip.forwarding=1
net.inet6.ip6.forwarding=1
# Increase mbuf clusters
kern.ipc.nmbclusters=262144
# Optimize for packet forwarding
net.inet.ip.process_options=0
net.inet.ip.random_id=0
net.inet.ip.redirect=0
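Forwarding is usually made persistent through /etc/rc.conf rather than sysctl.conf; a minimal sketch using sysrc:
# Equivalent persistent settings in /etc/rc.conf
sysrc gateway_enable="YES"       # sets net.inet.ip.forwarding=1 at boot
sysrc ipv6_gateway_enable="YES"  # sets net.inet6.ip6.forwarding=1 at boot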
IPv6 Optimization
For systems using IPv6:
# Accept router advertisements (hosts using SLAAC autoconfiguration)
net.inet6.ip6.accept_rtadv=1
# Optimize IPv6 parameters
net.inet6.ip6.auto_flowlabel=1
net.inet6.ip6.use_tempaddr=1
net.inet6.ip6.prefer_tempaddr=1
# Increase IPv6 fragment reassembly limits
net.inet6.ip6.maxfragpackets=1024
net.inet6.ip6.maxfrags=1024
RACK and BBR Support
Recent FreeBSD versions ship RACK (Recent ACKnowledgment) and BBR (Bottleneck Bandwidth and RTT) as alternate TCP stacks rather than plain congestion control modules. They are provided as loadable modules (older releases require a kernel built with the extra TCP stacks):
# Load the alternate TCP stacks
kldload tcp_rack tcp_bbr
# List the stacks the kernel knows about
sysctl net.inet.tcp.functions_available
# Select the default stack for new connections
net.inet.tcp.functions_default=rack # or: bbr
Network Memory Tuning
The socket buffer maximums (kern.ipc.maxsockbuf, net.inet.tcp.recvbuf_max, net.inet.tcp.sendbuf_max) were covered under Network Memory Management above. For high-throughput and jumbo-frame workloads, also tune the mbuf cluster pools:
# Adjust mbuf cluster settings
kern.ipc.nmbclusters=262144
kern.ipc.nmbjumbop=131072 # For jumbo frames
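After raising these limits, verify actual usage and look for denied allocation requests:
# Show mbuf usage, cluster pools, and any denied allocations
netstat -m
# Per-zone detail for the mbuf-related UMA zones
vmstat -z | grep -i mbuf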
Monitoring and Validation
After applying changes, monitor performance to validate improvements:
# Monitor network throughput
systat -ifstat
netstat -w 1 -I interface_name
# Check TCP statistics
netstat -s -p tcp
# Monitor system resource usage
top -P
vmstat 1
Use performance testing tools:
# Test TCP throughput
iperf3 -c server_ip -P 4 -t 30
# Test connection establishment rate
netperf -H server_ip -t TCP_CRR
# Test latency
ping -c 100 server_ip | tail -1
Automating Tuning with Scripts
Create a script for automated tuning based on workload:
#!/bin/sh
# /usr/local/sbin/optimize_network.sh
# Detect system memory
MEM=$(sysctl -n hw.physmem)
MEM_GB=$((MEM / 1073741824))
# Scale TCP buffers based on available memory
if [ $MEM_GB -ge 64 ]; then
    # High-memory system
    sysctl net.inet.tcp.sendspace=262144
    sysctl net.inet.tcp.recvspace=262144
    sysctl kern.ipc.maxsockbuf=16777216
elif [ $MEM_GB -ge 16 ]; then
    # Medium-memory system
    sysctl net.inet.tcp.sendspace=131072
    sysctl net.inet.tcp.recvspace=131072
    sysctl kern.ipc.maxsockbuf=8388608
else
    # Low-memory system
    sysctl net.inet.tcp.sendspace=65536
    sysctl net.inet.tcp.recvspace=65536
    sysctl kern.ipc.maxsockbuf=4194304
fi
# Common optimizations
sysctl net.inet.tcp.delayed_ack=0
sysctl net.inet.tcp.cc.algorithm=cubic
sysctl kern.ipc.somaxconn=4096
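One way to apply the script automatically at boot is /etc/rc.local, which FreeBSD still executes late in the boot sequence (the path below matches the script header above):
# Make the script executable and run it at every boot
chmod +x /usr/local/sbin/optimize_network.sh
echo '/usr/local/sbin/optimize_network.sh' >> /etc/rc.local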
Persistent Configuration
For permanent changes, add optimized parameters to /etc/sysctl.conf:
# TCP Buffer Sizes
net.inet.tcp.sendspace=131072
net.inet.tcp.recvspace=131072
# TCP Connection Parameters
kern.ipc.somaxconn=4096
net.inet.tcp.syncache.bucketlimit=1024
net.inet.tcp.syncache.cachelimit=30720
# Congestion Control
net.inet.tcp.cc.algorithm=cubic
# Network Memory Management
kern.ipc.maxsockbuf=16777216
net.inet.tcp.recvbuf_max=16777216
net.inet.tcp.sendbuf_max=16777216
# Interface Queue Length
net.link.ifqmaxlen=2048
# FIN_WAIT_2 State
net.inet.tcp.finwait2_timeout=30000
# Network Stack Limits
kern.ipc.maxsockets=204800
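Note that a few values are boot-time tunables rather than runtime sysctls; net.link.ifqmaxlen, for example, is typically read-only once the system is up and belongs in /boot/loader.conf instead, along with any congestion control or TCP stack modules you want loaded:
# /boot/loader.conf -- boot-time tunables and modules
net.link.ifqmaxlen="2048"
cc_cubic_load="YES"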
Considerations and Best Practices
Incremental Changes: Make adjustments incrementally and test after each change.
Workload-Specific Tuning: Tailor optimizations to your specific workload rather than applying general recommendations.
Document Changes: Keep a record of all changes for future reference and troubleshooting.
Regular Re-evaluation: Network optimization is not a one-time task. Regularly re-evaluate as workloads change.
Balance Resources: Network tuning often involves trade-offs between memory usage, CPU utilization, and latency.
System-Wide Impact: Remember that network tuning can affect overall system performance, not just networking.
Conservative Defaults: FreeBSD’s default values are conservative by design. Only change what’s necessary for your specific needs.
Troubleshooting Common Issues
High Retransmission Rates
If you see high retransmission rates:
netstat -s -p tcp | grep retrans
Try adjusting:
net.inet.tcp.mssdflt=1448
net.inet.tcp.minmss=536
net.inet.tcp.rfc3390=1
Connection Timeouts
For connection establishment timeouts:
net.inet.tcp.keepinit=5000 # Reduce from default 75000 (75 seconds)
net.inet.tcp.syncache.rexmtlimit=2 # Retransmit SYN|ACKs fewer times (default: 3)
Memory-Related Issues
If you encounter memory allocation failures:
net.inet.tcp.sendbuf_inc=16384 # Smaller increments for buffer auto-scaling
net.inet.tcp.recvbuf_inc=16384
Conclusion
Proper sysctl tuning can significantly improve network performance on FreeBSD systems, but it requires a thoughtful, measured approach. Start with understanding your specific workload requirements, establish baseline metrics, make incremental changes, and validate improvements through rigorous testing.
The parameters discussed in this article provide a solid foundation for network optimization, but remember that every environment is unique. The best tuning strategy is one that directly addresses your specific performance bottlenecks and workload characteristics.
FreeBSD’s networking stack is extraordinarily flexible and powerful, allowing for fine-grained control through sysctl parameters. By leveraging this flexibility, you can achieve exceptional network performance tailored precisely to your needs.
Remember to consult the FreeBSD Handbook and man pages for the most up-to-date information on sysctl parameters, as they may change with different FreeBSD versions. Regular performance monitoring and periodic re-evaluation of your tuning parameters will ensure your system continues to deliver optimal network performance as your workload evolves.