Network engineers often encounter situations where TCP windowing or application performance is blamed on the network infrastructure. After extensive packet captures, tcpdumps, and network analysis, the true bottleneck is frequently discovered: exhausted NIC (Network Interface Card) or OS-level buffers on the client or server systems.
This article provides both legacy (circa 2009) and current (2025-2026) buffer configurations for Linux, Windows, and macOS, along with diagnostic techniques for identifying buffer exhaustion before it becomes a critical problem.
TCP uses a flow-control mechanism in which the receiver advertises a "window size" indicating how much data it can accept. When system buffers fill up, this window shrinks to zero, forcing the sender to wait. This appears to be a network problem but is actually a host resource problem.
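This is easy to reproduce locally: pause a receiver so its kernel socket buffer fills, and watch the advertised window collapse. A minimal sketch, assuming OpenBSD-style `nc` and iproute2's `ss` (adjust the netcat flags for your distribution):

```bash
# Pause a receiver so its kernel socket buffer fills; the advertised
# window then drops to zero even though the network itself is fine.
nc -l 5001 > /dev/null & recv=$!       # receiver
( yes | nc 127.0.0.1 5001 ) & send=$!  # sender pushing data continuously
kill -STOP "$recv"                     # stop the receiver from draining its buffer
sleep 3
ss -tmi 'sport = :5001'                # Recv-Q near the buffer limit; peer sees window 0
kill -CONT "$recv"; kill "$send" "$recv" 2>/dev/null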
```bash
# Check current TCP buffer settings
sysctl net.ipv4.tcp_rmem
sysctl net.ipv4.tcp_wmem
sysctl net.core.rmem_max
sysctl net.core.wmem_max

# Check NIC ring buffer sizes
ethtool -g eth0

# Monitor socket buffer usage
ss -tm

# Check for TCP zero window events (TCP window field, offset 14, is zero)
tcpdump -i any 'tcp[14:2] = 0' -vv

# Check network statistics for buffer issues
netstat -s | grep -i "buffer\|queue\|drop"
```
```powershell
# Check TCP parameters
netsh interface tcp show global

# View network adapter buffer settings
Get-NetAdapterAdvancedProperty -Name "Ethernet" | Where-Object {$_.DisplayName -like "*buffer*"}

# Monitor TCP statistics
netstat -s -p tcp

# Check receive window auto-tuning
netsh interface tcp show global | findstr "Receive Window"
```
```bash
# Check current buffer settings
sysctl kern.ipc.maxsockbuf
sysctl net.inet.tcp.sendspace
sysctl net.inet.tcp.recvspace

# View network statistics
netstat -s -p tcp

# Monitor socket buffers
netstat -an -p tcp
```
| Parameter | Value (2009) | Description |
|---|---|---|
| net.core.rmem_default | 124928 (122KB) | Default receive socket buffer size |
| net.core.rmem_max | 131071 (128KB) | Maximum receive socket buffer size |
| net.core.wmem_default | 124928 (122KB) | Default send socket buffer size |
| net.core.wmem_max | 131071 (128KB) | Maximum send socket buffer size |
| net.ipv4.tcp_rmem | 4096 87380 174760 | TCP receive buffer: min, default, max (bytes) |
| net.ipv4.tcp_wmem | 4096 16384 131072 | TCP send buffer: min, default, max (bytes) |
| net.ipv4.tcp_mem | 196608 262144 393216 | TCP memory pages: low, pressure, high |
| net.core.netdev_max_backlog | 1000 | Maximum packets in the input queue |
| net.core.optmem_max | 10240 (10KB) | Maximum ancillary buffer size per socket |
| Parameter | Current Recommended Value | Description |
|---|---|---|
| net.core.rmem_default | 16777216 (16MB) | Default receive socket buffer size |
| net.core.rmem_max | 134217728 (128MB) | Maximum receive socket buffer size |
| net.core.wmem_default | 16777216 (16MB) | Default send socket buffer size |
| net.core.wmem_max | 134217728 (128MB) | Maximum send socket buffer size |
| net.ipv4.tcp_rmem | 4096 87380 134217728 | TCP receive buffer: min, default, max (128MB max) |
| net.ipv4.tcp_wmem | 4096 65536 134217728 | TCP send buffer: min, default, max (128MB max) |
| net.ipv4.tcp_mem | 8388608 12582912 16777216 | TCP memory pages: low, pressure, high (64GB system) |
| net.core.netdev_max_backlog | 250000 | Maximum packets in the input queue (10GbE+) |
| net.core.optmem_max | 65536 (64KB) | Maximum ancillary buffer size per socket |
| net.ipv4.tcp_congestion_control | bbr | Use BBR congestion control (Google's algorithm) |
| net.ipv4.tcp_window_scaling | 1 | Enable TCP window scaling (RFC 1323) |
| net.ipv4.tcp_timestamps | 1 | Enable TCP timestamps for better RTT estimation |
| net.ipv4.tcp_sack | 1 | Enable Selective Acknowledgment |
| net.ipv4.tcp_no_metrics_save | 1 | Disable caching of TCP metrics |
Add these settings to /etc/sysctl.conf, or create a new file /etc/sysctl.d/99-network-tuning.conf:
```
# Network Buffer Tuning for High-Performance Applications
# Optimized for 10GbE+ networks with RTT up to 300ms

# Core socket buffer settings
net.core.rmem_default = 16777216
net.core.rmem_max = 134217728
net.core.wmem_default = 16777216
net.core.wmem_max = 134217728

# TCP buffer settings
net.ipv4.tcp_rmem = 4096 87380 134217728
net.ipv4.tcp_wmem = 4096 65536 134217728
net.ipv4.tcp_mem = 8388608 12582912 16777216

# Device buffer settings
net.core.netdev_max_backlog = 250000
net.core.netdev_budget = 50000
net.core.netdev_budget_usecs = 5000
net.core.optmem_max = 65536

# TCP optimizations
net.ipv4.tcp_congestion_control = bbr
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_sack = 1
net.ipv4.tcp_no_metrics_save = 1
net.ipv4.tcp_moderate_rcvbuf = 1

# Apply with: sysctl -p /etc/sysctl.d/99-network-tuning.conf
```
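Note that the bbr setting only takes effect if the kernel has the BBR module available (mainline since 4.9). A quick check-and-load sequence, including the fq qdisc that BBR is usually paired with:

```bash
# Verify BBR is available and load the module if needed
sysctl net.ipv4.tcp_available_congestion_control
sudo modprobe tcp_bbr

# BBR works best with the fq packet scheduler
sudo sysctl -w net.core.default_qdisc=fq

# Apply the tuning file and confirm the active algorithm
sudo sysctl -p /etc/sysctl.d/99-network-tuning.conf
sysctl net.ipv4.tcp_congestion_control
```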
```bash
# Check current ring buffer sizes
ethtool -g eth0

# Set maximum ring buffer sizes (adjust based on NIC capabilities)
ethtool -G eth0 rx 4096 tx 4096

# Make persistent by adding to /etc/network/interfaces or a systemd service
```
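For the persistence step, one option is a small systemd oneshot unit. A sketch (the unit name is arbitrary; adjust `eth0` and the ethtool path for your system):

```bash
# Create a oneshot unit that reapplies ring buffer sizes at boot
sudo tee /etc/systemd/system/nic-ring-buffers.service <<'EOF'
[Unit]
Description=Set NIC ring buffer sizes
After=network-pre.target

[Service]
Type=oneshot
ExecStart=/usr/sbin/ethtool -G eth0 rx 4096 tx 4096

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now nic-ring-buffers.service
```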
| Parameter | Legacy Value (2009) | Description |
|---|---|---|
| TcpWindowSize | 65535 (64KB) | Registry: HKLM\System\CurrentControlSet\Services\Tcpip\Parameters |
| Tcp1323Opts | 0 (disabled) | Window scaling disabled by default |
| DefaultReceiveWindow | 8192 (8KB) | Default receive window |
| DefaultSendWindow | 8192 (8KB) | Default send window |
| GlobalMaxTcpWindowSize | 65535 (64KB) | Maximum TCP window size |
| TcpNumConnections | 16777214 | Maximum TCP connections |
Modern Windows uses the Receive Window Auto-Tuning feature, which dynamically adjusts receive buffers based on network conditions.
| Feature | Current Recommended Setting | Description |
|---|---|---|
| Auto-tuning level | normal (or experimental for 10GbE+) | Dynamic receive window adjustment |
| Receive-Side Scaling (RSS) | enabled | Distribute network processing across CPUs |
| Chimney Offload | automatic (or disabled on modern NICs) | TCP offload to NIC hardware |
| NetDMA | disabled | Direct memory access (deprecated) |
| TCP global parameters | See commands below | System-wide TCP settings |
| Congestion Provider | CUBIC (or NewReno fallback) | TCP congestion control algorithm |
```powershell
# Check current auto-tuning level
netsh interface tcp show global

# Enable auto-tuning (normal mode - default for most scenarios)
netsh interface tcp set global autotuninglevel=normal

# For high-bandwidth, high-latency networks (10GbE+, data center environments)
netsh interface tcp set global autotuninglevel=experimental

# For conservative tuning (if experimental causes issues)
netsh interface tcp set global autotuninglevel=restricted

# For very conservative tuning (not recommended for high-performance networks)
netsh interface tcp set global autotuninglevel=highlyrestricted

# Enable CUBIC congestion provider (Windows Server 2022/Windows 11+ only)
netsh interface tcp set supplemental template=Internet congestionprovider=cubic
# Note: Windows 10 and Server 2019 use Compound TCP or NewReno by default
# CUBIC is not available on these older versions

# Enable Receive-Side Scaling (RSS)
netsh interface tcp set global rss=enabled

# Set chimney offload (automatic is recommended)
netsh interface tcp set global chimney=automatic

# Disable NetDMA (recommended for modern systems)
netsh interface tcp set global netdma=disabled

# Enable Direct Cache Access (if supported)
netsh interface tcp set global dca=enabled

# Enable ECN (Explicit Congestion Notification)
netsh interface tcp set global ecncapability=enabled

# Set the initial retransmission timeout to 3000 ms
netsh interface tcp set global initialRto=3000
```
```powershell
# View current adapter settings
Get-NetAdapterAdvancedProperty -Name "Ethernet"

# Increase receive buffers (adjust based on NIC)
Set-NetAdapterAdvancedProperty -Name "Ethernet" -DisplayName "Receive Buffers" -DisplayValue 2048

# Increase transmit buffers
Set-NetAdapterAdvancedProperty -Name "Ethernet" -DisplayName "Transmit Buffers" -DisplayValue 2048

# Enable Jumbo Frames (if network supports it)
Set-NetAdapterAdvancedProperty -Name "Ethernet" -DisplayName "Jumbo Packet" -DisplayValue 9014

# Enable Large Send Offload (LSO)
Set-NetAdapterAdvancedProperty -Name "Ethernet" -DisplayName "Large Send Offload V2 (IPv4)" -DisplayValue Enabled
Set-NetAdapterAdvancedProperty -Name "Ethernet" -DisplayName "Large Send Offload V2 (IPv6)" -DisplayValue Enabled
```
```powershell
# These settings are typically NOT needed on Windows 10/11 due to auto-tuning
# Only modify if auto-tuning is disabled or problematic
# Registry path: HKLM\System\CurrentControlSet\Services\Tcpip\Parameters

# Maximum TCP window size (if auto-tuning disabled)
# TcpWindowSize = 16777216 (16MB) - REG_DWORD

# Enable window scaling (enabled by default on modern Windows)
# Tcp1323Opts = 3 - REG_DWORD

# TIME_WAIT delay in seconds
# TcpTimedWaitDelay = 30 - REG_DWORD (default 240)
```
| Parameter | Legacy Value (2009) | Description |
|---|---|---|
| kern.ipc.maxsockbuf | 262144 (256KB) | Maximum socket buffer size |
| net.inet.tcp.sendspace | 32768 (32KB) | Default TCP send buffer |
| net.inet.tcp.recvspace | 32768 (32KB) | Default TCP receive buffer |
| net.inet.tcp.autorcvbufmax | 131072 (128KB) | Maximum auto-tuned receive buffer |
| net.inet.tcp.autosndbufmax | 131072 (128KB) | Maximum auto-tuned send buffer |
| net.inet.tcp.rfc1323 | 0 (disabled) | TCP window scaling |
| Parameter | Current Recommended Value | Description |
|---|---|---|
| kern.ipc.maxsockbuf | 8388608 (8MB) | Maximum socket buffer size |
| net.inet.tcp.sendspace | 131072 (128KB) | Default TCP send buffer |
| net.inet.tcp.recvspace | 131072 (128KB) | Default TCP receive buffer |
| net.inet.tcp.autorcvbufmax | 16777216 (16MB) | Maximum auto-tuned receive buffer |
| net.inet.tcp.autosndbufmax | 16777216 (16MB) | Maximum auto-tuned send buffer |
| net.inet.tcp.rfc1323 | 1 (enabled) | Enable TCP window scaling |
| net.inet.tcp.sack | 1 (enabled) | Enable Selective Acknowledgment |
| net.inet.tcp.mssdflt | 1440 | Default TCP maximum segment size |
| net.inet.tcp.delayed_ack | 3 | Delayed ACK behavior |
```bash
# Check current settings
sysctl kern.ipc.maxsockbuf
sysctl net.inet.tcp.sendspace
sysctl net.inet.tcp.recvspace
sysctl net.inet.tcp.autorcvbufmax
sysctl net.inet.tcp.autosndbufmax

# Apply settings temporarily (until reboot)
sudo sysctl -w kern.ipc.maxsockbuf=8388608
sudo sysctl -w net.inet.tcp.sendspace=131072
sudo sysctl -w net.inet.tcp.recvspace=131072
sudo sysctl -w net.inet.tcp.autorcvbufmax=16777216
sudo sysctl -w net.inet.tcp.autosndbufmax=16777216
sudo sysctl -w net.inet.tcp.rfc1323=1
sudo sysctl -w net.inet.tcp.sack=1

# Make settings persistent (create /etc/sysctl.conf)
sudo tee /etc/sysctl.conf <<EOF
kern.ipc.maxsockbuf=8388608
net.inet.tcp.sendspace=131072
net.inet.tcp.recvspace=131072
net.inet.tcp.autorcvbufmax=16777216
net.inet.tcp.autosndbufmax=16777216
EOF
```

Create a LaunchDaemon for persistent settings:
```bash
# Create /Library/LaunchDaemons/com.local.sysctl.plist
sudo tee /Library/LaunchDaemons/com.local.sysctl.plist <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.local.sysctl</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/sbin/sysctl</string>
        <string>-w</string>
        <string>kern.ipc.maxsockbuf=8388608</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
</dict>
</plist>
EOF

sudo chmod 644 /Library/LaunchDaemons/com.local.sysctl.plist
sudo launchctl load /Library/LaunchDaemons/com.local.sysctl.plist
```

Warning: macOS Ventura (13) and later have System Integrity Protection (SIP) restrictions. Some kernel parameters cannot be changed even with sudo. Test these settings in your specific environment.
```bash
# Server side
iperf3 -s

# Client side - test TCP throughput
iperf3 -c server_ip -t 60 -i 5 -w 16M

# Test with multiple parallel streams
iperf3 -c server_ip -P 10 -t 60

# Test UDP performance
iperf3 -c server_ip -u -b 1000M -t 60
```
```bash
# Capture and display TCP window sizes
tcpdump -i any -n 'tcp' -vv | grep -i window

# Save capture for Wireshark analysis
tcpdump -i any -w /tmp/capture.pcap 'tcp port 443'
```
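If the capture is analyzed on the command line instead of in the Wireshark GUI, tshark's built-in TCP analysis flags can pull out the zero-window events directly (a sketch, assuming tshark is installed):

```bash
# List zero-window events from the saved capture, with timestamps and endpoints
tshark -r /tmp/capture.pcap -Y 'tcp.analysis.zero_window' \
    -T fields -e frame.time -e ip.src -e ip.dst -e tcp.window_size
```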
Look for these indicators of buffer problems:
```bash
# Linux - Monitor network buffer statistics
watch -n 1 'cat /proc/net/sockstat'
watch -n 1 'ss -tm | grep -i mem'

# Check for drops
netstat -s | grep -i drop

# Windows - Monitor TCP statistics
netstat -e 1

# macOS - Monitor network statistics
netstat -s -p tcp
```
To determine optimal buffer sizes for your network, calculate the bandwidth-delay product (BDP):
```
BDP = Bandwidth (bits/sec) × RTT (seconds)

Example for 10 Gigabit Ethernet with 50 ms RTT:
BDP = 10,000,000,000 × 0.050 = 500,000,000 bits = 62.5 MB

Buffer Size = BDP × 2 (for bidirectional traffic and headroom)
Buffer Size = 62.5 MB × 2 = 125 MB
```

This is why modern settings recommend 128 MB maximum buffers.
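The same arithmetic is easy to script. A small awk-based sketch, where the link speed and RTT are the only inputs:

```bash
# BDP calculator: prints the BDP and the 2x-BDP buffer recommendation
bandwidth_bps=10000000000   # 10 GbE
rtt_ms=50

awk -v bw="$bandwidth_bps" -v rtt="$rtt_ms" 'BEGIN {
    bdp_bytes = bw * (rtt / 1000) / 8          # bits in flight over one RTT -> bytes
    printf "BDP:                %.1f MB\n", bdp_bytes / 1000000
    printf "Recommended buffer: %.1f MB\n", 2 * bdp_bytes / 1000000
}'
```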
| Workload Type | Recommended Buffer Size | Key Parameters |
|---|---|---|
| Web server (low latency) | 4-16 MB | Smaller buffers, many connections, fast response |
| Database server | 16-32 MB | Moderate buffers, consistent throughput |
| File transfer / backup | 64-128 MB | Maximum buffers, high throughput priority |
| Video streaming | 32-64 MB | Large buffers, consistent delivery rate |
| HPC / data center | 128-256 MB | Maximum buffers, specialized congestion control |
| Wireless / mobile | 2-8 MB | Conservative buffers, variable latency handling |
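As a concrete example, the file transfer / backup row above can be turned into its own drop-in profile on Linux (the file name is arbitrary; the values come straight from the table):

```bash
# A bulk-transfer profile: 128 MB maximum buffers from the table above
sudo tee /etc/sysctl.d/98-bulk-transfer.conf <<'EOF'
net.core.rmem_max = 134217728
net.core.wmem_max = 134217728
net.ipv4.tcp_rmem = 4096 87380 134217728
net.ipv4.tcp_wmem = 4096 65536 134217728
EOF

sudo sysctl -p /etc/sysctl.d/98-bulk-transfer.conf
```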
Buffer exhaustion is a common cause of performance problems that appear to be network-related. By understanding the evolution of buffer sizing from 2009's 128KB limits to today's 128MB capabilities, network engineers can quickly identify and resolve these issues.
Key takeaways:

- A TCP zero window signals exhausted host buffers, not a failing network.
- Defaults have grown roughly a thousandfold since 2009; verify your hosts actually use modern values.
- Size maximum buffers from the bandwidth-delay product, then adjust per workload.
Remember: the "network problem" that packet analysis reveals as TCP zero windows is actually a host system resource problem. With proper buffer tuning, you can eliminate these false diagnoses and achieve optimal performance.
Last updated: February 2, 2026
Author: Baud9600 Technical Team