Debian Lenny and Squeeze
2x Realtek 8169
1x Realtek 8169, 1x nForce
balance-rr (mode 0) – Round-robin policy: transmit packets in sequential order from the first available slave through the last. Provides load balancing and fault tolerance.
active-backup (mode 1) – One slave interface is active at any time. If it fails, another interface takes over the MAC address and becomes the active interface. Provides fault tolerance only. Doesn’t require special switch support.
balance-xor (mode 2) – Transmissions are balanced across the slave interfaces based on ((source MAC) XOR (destination MAC)) modulo slave count. The same slave is therefore selected for each destination MAC address. Provides load balancing and fault tolerance.
broadcast (mode 3) – Transmits everything on all slave interfaces. Provides fault tolerance.
802.3ad (mode 4) – Classic IEEE 802.3ad dynamic link aggregation. Requires 802.3ad (LACP) support in the switch and driver support for retrieving the speed and duplex of each slave.
balance-tlb (mode 5) – Adaptive transmit load balancing: incoming traffic is received on the active slave only; outgoing traffic is distributed according to the current load on each slave. Doesn’t require special switch support.
balance-alb (mode 6) – Adaptive load balancing: provides both transmit load balancing (TLB) and receive load balancing for IPv4 via ARP negotiation. Doesn’t require special switch support, but does require the ability to change the MAC address of a device while it is open.
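A minimal bonding stanza for /etc/network/interfaces (with the ifenslave package installed) might look like the sketch below. The interface names eth0/eth1/bond0 and the addresses are placeholders; adjust bond-mode to whichever of the modes above suits your switch and hardware:

```
auto bond0
iface bond0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
    bond-slaves eth0 eth1
    bond-mode active-backup
    bond-primary eth0
    bond-miimon 100
```

After editing, bring the bond up with ifup bond0 and check /proc/net/bonding/bond0 to verify the mode and slave states.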
bond-miimon 100 : Specifies the MII link monitoring frequency in milliseconds, i.e. how often the link state of each slave is inspected for link failures. A value of zero disables MII link monitoring; 100 is a good starting point. The use_carrier option, below, affects how the link state is determined. See the High Availability section for additional information. The default value is 0.
bond-downdelay 200 : Set the time, here 200 milliseconds, to wait before disabling a slave after a link failure has been detected. This option is only valid in conjunction with bond-miimon.
bond-updelay 200 : Set the time, here 200 milliseconds, to wait before enabling a slave after a link recovery has been detected. This option is only valid in conjunction with bond-miimon.
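The three monitoring options combine in an interfaces stanza as sketched below (bond0, eth0 and eth1 are placeholder names). Note that the bonding driver rounds downdelay and updelay to a multiple of the miimon interval:

```
iface bond0 inet dhcp
    bond-slaves eth0 eth1
    bond-mode balance-rr
    bond-miimon 100
    bond-downdelay 200
    bond-updelay 200
```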
J.A. Sullivan on the debian-user list writes:
There are a couple of issues in bonding which can bite the unsuspecting (as they did me!). Round robin will load balance across multiple interfaces but can produce serious issues with managing out-of-order TCP packets. Thus, the performance gain decreases dramatically with the number of interfaces. In other words, 2 NICs in RR mode will not give 2x the performance, nor 3 NICs 3x the performance. I do not recall the exact numbers off the top of my head, but averages are something like:
2 NICs – 1.6x performance
3 NICs – 1.9x performance
The other modes (other than failover) eliminate the out-of-order TCP problem, but do so at a cost: all traffic for a single traffic flow goes across a single path. The most common way to identify a single traffic flow is matching source and destination MAC addresses. Some bonding algorithms allow matches on layer 3 or even layer 4 data but, if the switch through which they flow only supports MAC-to-MAC flow assignments, it will all devolve to matching MAC addresses anyway.
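The MAC-based flow selection described above can be sketched numerically. In balance-xor mode the driver effectively computes ((source MAC) XOR (destination MAC)) modulo the slave count; with the default layer-2 hash policy only the last octet of each address matters. A toy illustration, with made-up octet values:

```shell
# Toy sketch of balance-xor slave selection (layer-2 hash).
# The MAC last octets below are made-up example values.
src=0x1a            # last octet of the source MAC
dst=0x2b            # last octet of the destination MAC
slaves=2            # number of slave interfaces in the bond

# The same (src, dst) pair always maps to the same slave index:
echo $(( (src ^ dst) % slaves ))    # prints 1 for these values
```

Because the result depends only on the address pair, every packet of a given flow lands on the same slave, which is exactly why a single connection cannot exceed one NIC's bandwidth.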
So what is the practical outcome using non-RR bonding? You have only one combination of source and destination MAC address for each socket; e.g., if you are measuring a single FTP connection, there is only one combination of source and destination MAC address. Thus, no matter how many NICs you have, all the traffic will flow across one combination of NICs. You will see no performance improvement.
In fact, depending on how the MAC addresses are advertised from the systems with multiple NICs, all traffic between two systems may flow across the same pair of NICs even if there are multiple, different NIC combinations available.
On the other hand, if you are using bonding to provide a trunk carrying traffic from many different source and destination MAC address combinations, each separate stream will be limited to the maximum of the individual NICs, but the aggregate throughput should increase almost linearly with the number of NICs.
Hope that helps – John