Network engineers often encounter situations where TCP windowing or poor application performance is blamed on the network infrastructure. After extensive packet captures, tcpdumps, and network analysis, the true bottleneck often turns out to be exhausted NIC (Network Interface Card) or OS-level buffers on the client or server systems.
This article provides both legacy (circa 2009) and current (2025-2026) buffer configurations for Linux, Windows, and macOS, along with diagnostic techniques for identifying buffer exhaustion before it becomes a critical issue.
TCP uses a flow-control mechanism in which the receiver advertises a "window size" indicating how much data it can accept. When the system's buffers fill up, this window shrinks toward zero, forcing the sender to wait. This looks like a network problem, but it is actually a host resource problem.
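One quick way to observe this mechanism on a Linux host is to inspect the kernel's per-connection TCP state. This is a minimal sketch; the destination address is a placeholder, and the exact fields printed by ss vary by kernel version.

# Show per-connection TCP internals (advertised window, cwnd, RTT) for established sockets
ss -ti state established

# Watch a single peer under load and look for a shrinking receive window
# (192.0.2.10 is a placeholder address)
watch -n 1 "ss -ti dst 192.0.2.10"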
# Check current TCP buffer settings
sysctl net.ipv4.tcp_rmem
sysctl net.ipv4.tcp_wmem
sysctl net.core.rmem_max
sysctl net.core.wmem_max

# Check NIC ring buffer sizes
ethtool -g eth0

# Monitor socket buffer usage
ss -tm

# Check for TCP zero window events (window field of 0 in the TCP header)
tcpdump -i any 'tcp[14:2] = 0' -vv

# Check network statistics for buffer issues
netstat -s | grep -i "buffer\|queue\|drop"
# Check TCP parameters
netsh interface tcp show global
# View network adapter buffer settings
Get-NetAdapterAdvancedProperty -Name "Ethernet" | Where-Object {$_.DisplayName -like "*buffer*"}
# Monitor TCP statistics
netstat -s -p tcp
# Check receive window auto-tuning
netsh interface tcp show global | findstr "Receive Window"
# Check current buffer settings
sysctl kern.ipc.maxsockbuf
sysctl net.inet.tcp.sendspace
sysctl net.inet.tcp.recvspace

# View network statistics
netstat -s -p tcp

# Monitor socket buffers
netstat -an -p tcp
| Parameter | Legacy Value (2009) | Description |
|---|---|---|
| net.core.rmem_default | 124928 (122KB) | Default receive socket buffer size |
| net.core.rmem_max | 131071 (128KB) | Maximum receive socket buffer size |
| net.core.wmem_default | 124928 (122KB) | Default send socket buffer size |
| net.core.wmem_max | 131071 (128KB) | Maximum send socket buffer size |
| net.ipv4.tcp_rmem | 4096 87380 174760 | TCP receive buffer: min, default, max (in bytes) |
| net.ipv4.tcp_wmem | 4096 16384 131072 | TCP send buffer: min, default, max (in bytes) |
| net.ipv4.tcp_mem | 196608 262144 393216 | TCP memory pages: low, pressure, high |
| net.core.netdev_max_backlog | 1000 | Maximum packets in the ingress queue |
| net.core.optmem_max | 10240 (10KB) | Maximum ancillary buffer size per socket |
| Parameter | Current Recommended Value | Description |
|---|---|---|
| net.core.rmem_default | 16777216 (16MB) | Default receive socket buffer size |
| net.core.rmem_max | 134217728 (128MB) | Maximum receive socket buffer size |
| net.core.wmem_default | 16777216 (16MB) | Default send socket buffer size |
| net.core.wmem_max | 134217728 (128MB) | Maximum send socket buffer size |
| net.ipv4.tcp_rmem | 4096 87380 134217728 | TCP receive buffer: min, default, max (128MB max) |
| net.ipv4.tcp_wmem | 4096 65536 134217728 | TCP send buffer: min, default, max (128MB max) |
| net.ipv4.tcp_mem | 8388608 12582912 16777216 | TCP memory pages: low, pressure, high (64GB system) |
| net.core.netdev_max_backlog | 250000 | Maximum packets in the ingress queue (10GbE+) |
| net.core.optmem_max | 65536 (64KB) | Maximum ancillary buffer size per socket |
| net.ipv4.tcp_congestion_control | bbr | Use BBR congestion control (Google's algorithm) |
| net.ipv4.tcp_window_scaling | 1 | Enable TCP window scaling (RFC 1323) |
| net.ipv4.tcp_timestamps | 1 | Enable TCP timestamps for better RTT estimation |
| net.ipv4.tcp_sack | 1 | Enable Selective Acknowledgment |
| net.ipv4.tcp_no_metrics_save | 1 | Disable caching of TCP metrics |
Add these settings to /etc/sysctl.conf, or create a new file /etc/sysctl.d/99-network-tuning.conf:
# Network Buffer Tuning for High-Performance Applications
# Optimized for 10GbE+ networks with RTT up to 300ms

# Core socket buffer settings
net.core.rmem_default = 16777216
net.core.rmem_max = 134217728
net.core.wmem_default = 16777216
net.core.wmem_max = 134217728

# TCP buffer settings
net.ipv4.tcp_rmem = 4096 87380 134217728
net.ipv4.tcp_wmem = 4096 65536 134217728
net.ipv4.tcp_mem = 8388608 12582912 16777216

# Device buffer settings
net.core.netdev_max_backlog = 250000
net.core.netdev_budget = 50000
net.core.netdev_budget_usecs = 5000
net.core.optmem_max = 65536

# TCP optimizations
net.ipv4.tcp_congestion_control = bbr
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_sack = 1
net.ipv4.tcp_no_metrics_save = 1
net.ipv4.tcp_moderate_rcvbuf = 1

# Apply with: sysctl -p /etc/sysctl.d/99-network-tuning.conf
# Check current ring buffer sizes
ethtool -g eth0

# Set maximum ring buffer sizes (adjust based on NIC capabilities)
ethtool -G eth0 rx 4096 tx 4096

# Make persistent by adding to /etc/network/interfaces or a systemd service (see the sketch below)
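For the systemd route, the following is a minimal sketch of a oneshot unit. The unit name, interface name (eth0), ring sizes, and the ethtool path are assumptions; adjust them for your hardware and distribution.

# Hedged sketch: oneshot systemd unit to reapply ring buffer sizes at boot.
# Save as /etc/systemd/system/nic-ring-buffers.service:
#
#   [Unit]
#   Description=Set NIC ring buffer sizes
#   After=network-pre.target
#   Wants=network-pre.target
#
#   [Service]
#   Type=oneshot
#   ExecStart=/usr/sbin/ethtool -G eth0 rx 4096 tx 4096
#
#   [Install]
#   WantedBy=multi-user.target
#
# Enable it:
sudo systemctl daemon-reload
sudo systemctl enable --now nic-ring-buffers.service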
| Parameter | Legacy Value (2009) | Notes |
|---|---|---|
| TcpWindowSize | 65535 (64KB) | Registry: HKLM\System\CurrentControlSet\Services\Tcpip\Parameters |
| Tcp1323Opts | 0 (disabled) | Window scaling disabled by default |
| DefaultReceiveWindow | 8192 (8KB) | Default receive window |
| DefaultSendWindow | 8192 (8KB) | Default send window |
| GlobalMaxTcpWindowSize | 65535 (64KB) | Maximum TCP window size |
| TcpNumConnections | 16777214 | Maximum TCP connections |
Modern Windows uses the Receive Window Auto-Tuning feature, which dynamically adjusts receive buffers based on network conditions.
| Feature | Current Recommended Setting | Description |
|---|---|---|
| Auto-Tuning Level | normal (or experimental for 10GbE+) | Dynamic receive window adjustment |
| Receive-Side Scaling (RSS) | enabled | Distribute network processing across CPUs |
| Chimney Offload | automatic (or disabled on modern NICs) | TCP offload to NIC hardware |
| NetDMA | disabled | Direct Memory Access (deprecated) |
| TCP Global Parameters | See commands below | System-wide TCP settings |
| Congestion Provider | CUBIC (or NewReno fallback) | TCP congestion control algorithm |
# Check current auto-tuning level
netsh interface tcp show global

# Enable auto-tuning (normal mode - default for most scenarios)
netsh interface tcp set global autotuninglevel=normal

# For high-bandwidth, high-latency networks (10GbE+, data center environments)
netsh interface tcp set global autotuninglevel=experimental

# For conservative tuning (if experimental causes issues)
netsh interface tcp set global autotuninglevel=restricted

# For very conservative tuning (not recommended for high-performance networks)
netsh interface tcp set global autotuninglevel=highlyrestricted

# Enable CUBIC congestion provider (Windows Server 2022/Windows 11+ only)
netsh interface tcp set supplemental template=Internet congestionprovider=cubic
# Note: Windows 10 and Server 2019 use Compound TCP or NewReno by default
# CUBIC is not available on these older versions

# Enable Receive-Side Scaling (RSS)
netsh interface tcp set global rss=enabled

# Set chimney offload (automatic is recommended)
netsh interface tcp set global chimney=automatic

# Disable NetDMA (recommended for modern systems)
netsh interface tcp set global netdma=disabled

# Enable Direct Cache Access (if supported)
netsh interface tcp set global dca=enabled

# Enable ECN (Explicit Congestion Notification)
netsh interface tcp set global ecncapability=enabled

# Set the initial retransmission timeout to 3000 ms (the default)
netsh interface tcp set global initialRto=3000
# View current adapter settings
Get-NetAdapterAdvancedProperty -Name "Ethernet"

# Increase receive buffers (adjust based on NIC)
Set-NetAdapterAdvancedProperty -Name "Ethernet" -DisplayName "Receive Buffers" -DisplayValue 2048

# Increase transmit buffers
Set-NetAdapterAdvancedProperty -Name "Ethernet" -DisplayName "Transmit Buffers" -DisplayValue 2048

# Enable Jumbo Frames (if network supports it)
Set-NetAdapterAdvancedProperty -Name "Ethernet" -DisplayName "Jumbo Packet" -DisplayValue 9014

# Enable Large Send Offload (LSO)
Set-NetAdapterAdvancedProperty -Name "Ethernet" -DisplayName "Large Send Offload V2 (IPv4)" -DisplayValue Enabled
Set-NetAdapterAdvancedProperty -Name "Ethernet" -DisplayName "Large Send Offload V2 (IPv6)" -DisplayValue Enabled
# These settings are typically NOT needed on Windows 10/11 due to auto-tuning
# Only modify if auto-tuning is disabled or problematic
# Registry path: HKLM\System\CurrentControlSet\Services\Tcpip\Parameters

# Maximum TCP window size (if auto-tuning disabled)
# TcpWindowSize = 16777216 (16MB) - REG_DWORD

# Enable window scaling (enabled by default on modern Windows)
# Tcp1323Opts = 3 - REG_DWORD

# Number of TCP Timed Wait Delay
# TcpTimedWaitDelay = 30 - REG_DWORD (default 240)
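If you do need to set one of these values, a hedged PowerShell sketch follows. Only the TcpTimedWaitDelay example from the comments above is shown; back up the key and verify the behaviour on your Windows build before applying it.

# Hedged sketch: apply one of the registry values above with PowerShell (run as Administrator)
$path = "HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters"

# Shorten TIME_WAIT from the 240-second default to 30 seconds
New-ItemProperty -Path $path -Name "TcpTimedWaitDelay" -PropertyType DWord -Value 30 -Force

# Verify the value
Get-ItemProperty -Path $path -Name "TcpTimedWaitDelay"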
| Parameter | Legacy Value (2009) | Description |
|---|---|---|
| kern.ipc.maxsockbuf | 262144 (256KB) | Maximum socket buffer size |
| net.inet.tcp.sendspace | 32768 (32KB) | Default TCP send buffer |
| net.inet.tcp.recvspace | 32768 (32KB) | Default TCP receive buffer |
| net.inet.tcp.autorcvbufmax | 131072 (128KB) | Maximum auto-tuned receive buffer |
| net.inet.tcp.autosndbufmax | 131072 (128KB) | Maximum auto-tuned send buffer |
| net.inet.tcp.rfc1323 | 0 (disabled) | Window scaling (RFC 1323) disabled by default |
| Parameter | Current Recommended Value | Description |
|---|---|---|
| kern.ipc.maxsockbuf | 8388608 (8MB) | Maximum socket buffer size |
| net.inet.tcp.sendspace | 131072 (128KB) | Default TCP send buffer |
| net.inet.tcp.recvspace | 131072 (128KB) | Default TCP receive buffer |
| net.inet.tcp.autorcvbufmax | 16777216 (16MB) | Maximum auto-tuned receive buffer |
| net.inet.tcp.autosndbufmax | 16777216 (16MB) | Maximum auto-tuned send buffer |
| net.inet.tcp.rfc1323 | 1 (enabled) | Enable TCP window scaling |
| net.inet.tcp.sack | 1 (enabled) | Enable Selective Acknowledgment |
| net.inet.tcp.mssdflt | 1440 | Default TCP maximum segment size |
| net.inet.tcp.delayed_ack | 3 | Delayed ACK behavior |
# Check current settings
sysctl kern.ipc.maxsockbuf
sysctl net.inet.tcp.sendspace
sysctl net.inet.tcp.recvspace
sysctl net.inet.tcp.autorcvbufmax
sysctl net.inet.tcp.autosndbufmax

# Apply settings temporarily (until reboot)
sudo sysctl -w kern.ipc.maxsockbuf=8388608
sudo sysctl -w net.inet.tcp.sendspace=131072
sudo sysctl -w net.inet.tcp.recvspace=131072
sudo sysctl -w net.inet.tcp.autorcvbufmax=16777216
sudo sysctl -w net.inet.tcp.autosndbufmax=16777216
sudo sysctl -w net.inet.tcp.rfc1323=1
sudo sysctl -w net.inet.tcp.sack=1

# Make settings persistent (create /etc/sysctl.conf)
sudo tee /etc/sysctl.conf <<EOF
kern.ipc.maxsockbuf=8388608
net.inet.tcp.sendspace=131072
net.inet.tcp.recvspace=131072
net.inet.tcp.autorcvbufmax=16777216
net.inet.tcp.autosndbufmax=16777216
net.inet.tcp.rfc1323=1
net.inet.tcp.sack=1
EOF

Creating a LaunchDaemon for persistent settings:

# Create /Library/LaunchDaemons/com.local.sysctl.plist
sudo tee /Library/LaunchDaemons/com.local.sysctl.plist <<EOF
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.local.sysctl</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/sbin/sysctl</string>
        <string>-w</string>
        <string>kern.ipc.maxsockbuf=8388608</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
</dict>
</plist>
EOF

sudo chmod 644 /Library/LaunchDaemons/com.local.sysctl.plist
sudo launchctl load /Library/LaunchDaemons/com.local.sysctl.plist

Warning: macOS Ventura (13) and later have System Integrity Protection (SIP) restrictions, and some kernel parameters may not be changeable even with sudo. Test settings in your specific environment.
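After loading the daemon (or rebooting), it is worth confirming that the job is registered and that the value actually took effect; a minimal check:

# Confirm the LaunchDaemon is loaded and the setting was applied
sudo launchctl list | grep com.local.sysctl
sysctl kern.ipc.maxsockbuf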
# Server side
iperf3 -s

# Client side - test TCP throughput
iperf3 -c server_ip -t 60 -i 5 -w 16M

# Test with multiple parallel streams
iperf3 -c server_ip -P 10 -t 60

# Test UDP performance
iperf3 -c server_ip -u -b 1000M -t 60
# Capture and display TCP window sizes
tcpdump -i any -n 'tcp' -vv | grep -i window

# Save capture for Wireshark analysis
tcpdump -i any -w /tmp/capture.pcap 'tcp port 443'
Look for these indicators of buffer problems:
# Linux - Monitor network buffer statistics
watch -n 1 'cat /proc/net/sockstat'
watch -n 1 'ss -tm | grep -i mem'

# Check for drops
netstat -s | grep -i drop

# Windows - Monitor TCP statistics
netstat -e 1

# macOS - Monitor network statistics
netstat -s -p tcp
To determine the optimal buffer sizes for your network, calculate the Bandwidth-Delay Product (BDP):
BDP = Bandwidth (bits/sec) × RTT (seconds)

Example for 10 Gigabit Ethernet with 50ms RTT:
BDP = 10,000,000,000 × 0.050 = 500,000,000 bits = 62.5 MB

Buffer Size = BDP × 2 (for bidirectional traffic and headroom)
Buffer Size = 62.5 MB × 2 = 125 MB

This is why modern settings recommend 128MB maximum buffers.
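A small shell helper makes it easy to repeat this calculation for your own link speed and RTT. This is a minimal sketch; the function name and example values are illustrative rather than taken from the article.

# Minimal sketch: compute the BDP and a suggested buffer ceiling in bash.
# Arguments: bandwidth in bits per second, RTT in milliseconds.
bdp_bytes() {
    local bandwidth_bps=$1 rtt_ms=$2
    # BDP (bytes) = bandwidth (bits/s) * RTT (s) / 8
    echo $(( bandwidth_bps * rtt_ms / 1000 / 8 ))
}

# 10 GbE with 50 ms RTT
bdp=$(bdp_bytes 10000000000 50)
echo "BDP:              $bdp bytes"
echo "Suggested buffer: $((bdp * 2)) bytes (BDP x 2)"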
| Workload Type | Recommended Buffer Size | Key Considerations |
|---|---|---|
| Web server (low latency) | 4-16 MB | Lower buffers, many connections, fast response (see the sketch after this table) |
| Database server | 16-32 MB | Moderate buffers, consistent throughput |
| File transfer / backup | 64-128 MB | Maximum buffers, throughput priority |
| Video streaming | 32-64 MB | Large buffers, consistent delivery rate |
| HPC / data center | 128-256 MB | Maximum buffers, specialized congestion control |
| Wireless / mobile | 2-8 MB | Conservative buffers, handling of variable latency |
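As an illustration of how a profile from the table maps to configuration, here is a minimal sketch for the low-latency web server row. The filename and exact values are assumptions derived from the table, not settings prescribed by it.

# Hedged sketch: sysctl drop-in for the web server (low latency) profile, 16MB ceilings.
# The filename and values are illustrative; size them from your own BDP calculation.
# Save as /etc/sysctl.d/60-webserver-tuning.conf:
#
#   net.core.rmem_max = 16777216
#   net.core.wmem_max = 16777216
#   net.ipv4.tcp_rmem = 4096 87380 16777216
#   net.ipv4.tcp_wmem = 4096 65536 16777216
#
# Apply it:
sudo sysctl -p /etc/sysctl.d/60-webserver-tuning.conf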
Buffer exhaustion is a common cause of performance issues that appear to be network-related. By understanding how buffer sizing has evolved from 2009's 128KB limits to today's 128MB capacities, network engineers can quickly identify and resolve these issues.
Key takeaways:
Remember: a "network problem" that packet analysis reveals to be TCP zero windows is actually a host resource problem. With proper buffer tuning, you can eliminate these false diagnoses and achieve optimal performance.
Last updated: February 2, 2026
Author: Baud9600 Technical Team