Load Balancer Pool Distribution Analyzer
Simulate how different load balancing algorithms distribute traffic across pool members. Model failure scenarios and compare redistribution efficiency.
Traffic Volume
Pool Members
Failure Scenario Simulation
How it works: Client IPs are sampled from a /16 subnet (10.0.0.0/16).
For source-based hashing, each unique client IP is deterministically mapped to a member.
Round robin ignores client IP and distributes sequentially. Weighted uses proportional
random assignment. Consistent hashing places both clients and members on a virtual ring.
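The sampling step described above can be sketched as follows. This is a minimal, illustrative sketch (the function name and seed are assumptions, not the tool's actual code): draw random host addresses from the 10.0.0.0/16 subnet.

```python
import ipaddress
import random

# Hypothetical sketch of the client sampling step: draw random host
# addresses from the 10.0.0.0/16 subnet mentioned above.
def sample_clients(n: int, seed: int = 0) -> list[str]:
    rng = random.Random(seed)  # seeded for reproducible simulations
    net = ipaddress.ip_network("10.0.0.0/16")
    # num_addresses includes the network and broadcast addresses;
    # sample only host offsets in between.
    return [str(net.network_address + rng.randrange(1, net.num_addresses - 1))
            for _ in range(n)]

clients = sample_clients(5)
```

Each algorithm below then maps these sampled IPs onto pool members in its own way.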
Pool Members
| ID | IP:Port | Weight | Status | Actions |
|---|---|---|---|---|
Distribution Results
Session Distribution per Member
Failure Redistribution Analysis
Consistent Hash Ring
Reading the ring: Each member is placed at multiple points around the ring
(virtual nodes). A client IP is hashed to a position on the ring, then the request is sent
to the nearest member clockwise. Adding or removing a member only affects the arc of the
ring it owned — minimizing session disruption.
Algorithm Reference
Round Robin
Sessions are assigned sequentially: member 1, 2, 3, ..., N, 1, 2, ... Each member
receives exactly 1/N of all requests. Ignores server capacity and client affinity.
Simple and predictable but fails hard on heterogeneous hardware.
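A minimal round-robin sketch (member names and session count are illustrative): sessions are assigned sequentially 1, 2, ..., N, then wrap back to 1, so counts come out exactly equal when the total divides evenly.

```python
from itertools import cycle
from collections import Counter

# Round robin: cycle through members in order, ignoring the client IP.
members = ["m1", "m2", "m3"]
assignments = [m for m, _ in zip(cycle(members), range(9))]  # 9 sessions
counts = Counter(assignments)
# 9 sessions across 3 members -> exactly 3 each
```

Note that nothing here consults server capacity, which is why round robin degrades on heterogeneous hardware.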
Source IP Hash
A hash of the client IP modulo the member count selects the backend. The same client
always reaches the same member — useful for stateful applications. Adding or removing
a member reshuffles nearly all client mappings (on average, (N-1)/N of all sessions move to a different member).
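The reshuffling effect is easy to demonstrate. In this sketch (MD5 is assumed as the hash; any stable hash works), shrinking the pool from 4 to 3 members changes the modulus, so most clients land on a different backend:

```python
import hashlib

def pick_member(client_ip: str, n_members: int) -> int:
    # Stable hash of the client IP, then modulo the member count.
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return int(digest, 16) % n_members

clients = [f"10.0.{i}.{j}" for i in range(4) for j in range(1, 26)]  # 100 IPs
before = {ip: pick_member(ip, 4) for ip in clients}
after = {ip: pick_member(ip, 3) for ip in clients}  # one member removed
moved = sum(before[ip] != after[ip] for ip in clients)
# roughly (N-1)/N = 3/4 of clients change member
```

A client keeps its member only when its hash gives the same residue mod 4 and mod 3, which happens for about a quarter of uniformly distributed hashes.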
Weighted
Each member receives a share proportional to its weight relative to the pool's total
weight. A member with weight=4 gets 4x the sessions of a weight=1 member. Used to model
heterogeneous backend capacity (e.g., a VM vs a bare-metal server).
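Proportional random assignment can be sketched with the standard library (member names and weights here are illustrative): each session independently picks a member with probability proportional to its weight.

```python
import random
from collections import Counter

# Weighted: a weight-4 member should receive ~4x the sessions of a weight-1 member.
members = {"vm": 1, "bare_metal": 4}
rng = random.Random(42)  # seeded for a reproducible simulation
picks = Counter(rng.choices(list(members), weights=list(members.values()), k=10_000))
# expected split: ~2000 for "vm", ~8000 for "bare_metal"
```

Because assignment is random rather than sequential, realized counts fluctuate around the expected proportions rather than matching them exactly.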
Consistent Hashing
Members and clients are both mapped onto a circular ring via hashing. Each client goes
to the nearest member clockwise on the ring. Virtual nodes (replicas per member)
improve distribution uniformity. When a member fails, only its sessions move to the
next member on the ring — roughly 1/N of sessions are disturbed, vs. 100% for modulo hashing.
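A compact ring sketch (MD5 as the hash and 100 virtual nodes per member are assumptions) shows the bounded-disruption property: removing a member relocates only the clients that member previously owned.

```python
import bisect
import hashlib

class Ring:
    """Minimal consistent-hash ring with virtual nodes (replicas per member)."""

    def __init__(self, members, replicas=100):
        self.replicas = replicas
        self.ring = []  # sorted list of (position, member)
        for m in members:
            self.add(m)

    def _hash(self, key: str) -> int:
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add(self, member: str) -> None:
        # Place `replicas` virtual nodes for this member around the ring.
        for r in range(self.replicas):
            bisect.insort(self.ring, (self._hash(f"{member}#{r}"), member))

    def remove(self, member: str) -> None:
        self.ring = [(p, m) for p, m in self.ring if m != member]

    def lookup(self, client_ip: str) -> str:
        # Nearest member clockwise: first ring position >= hash, wrapping around.
        pos = self._hash(client_ip)
        i = bisect.bisect(self.ring, (pos,)) % len(self.ring)
        return self.ring[i][1]

ring = Ring(["m1", "m2", "m3", "m4"])
clients = [f"10.0.{i}.{j}" for i in range(4) for j in range(1, 26)]  # 100 IPs
before = {ip: ring.lookup(ip) for ip in clients}
ring.remove("m2")
after = {ip: ring.lookup(ip) for ip in clients}
moved = sum(before[ip] != after[ip] for ip in clients)
# only clients previously on m2 move (~1/N of sessions)
```

Contrast with the modulo example above: here the moved set is exactly the failed member's former clients, not a near-total reshuffle.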