Debian Squeeze 802.3ad

Setup:
Debian Lenny and Squeeze
2x Realtek 8169
1x Realtek 8169, 1x nForce
Switch: D-Link DGS-1210-24 Rev. A

Testing:
iptraf
cat /proc/net/bonding/bond0
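
To verify that both slaves joined the bond and to watch how traffic is balanced, something like the following works (assuming the bond is named bond0 and iptraf is installed):

    # Confirm each slave's link state and aggregator membership
    grep -E 'Slave Interface|MII Status|Aggregator ID' /proc/net/bonding/bond0

    # Watch per-interface traffic to see how flows are balanced
    iptraf -d eth0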

[1]
mode=0 (balance-rr)
Round-robin policy: Transmit packets in sequential order from the first available slave through the last. This mode provides load balancing and fault tolerance.


mode=1 (active-backup)
One slave interface is active at any time. If one interface fails, another interface takes over the MAC address and becomes the active interface. Provides fault tolerance only. Doesn’t require special switch support.


mode=2 (balance-xor)
Transmissions are balanced across the slave interfaces based on ((source MAC) XOR (dest MAC)) modulo slave count, so the same slave is selected for each destination MAC (e.g., with two slaves, the low-order bit of the XOR picks slave 0 or slave 1). Provides load balancing and fault tolerance.


mode=3 (broadcast)
Transmits everything on all slave interfaces. Provides fault tolerance.


mode=4 (802.3ad)
This is classic IEEE 802.3ad dynamic link aggregation (LACP). It requires 802.3ad support in the switch and driver support for retrieving the speed and duplex of each slave (see the configuration sketch after this list).


mode=5 (balance-tlb)
Adaptive Transmit Load Balancing. Incoming traffic is received on the active slave only, outgoing traffic is distributed according to the current load on each slave. Doesn’t require special switch support.


mode=6 (balance-alb)
Adaptive Load Balancing – provides both transmit load balancing (TLB) and receive load balancing for IPv4 via ARP negotiation. Doesn’t require special switch support, but does require the ability to change the MAC address of a device while it is open.
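
As a concrete example, a minimal /etc/network/interfaces sketch for mode 4 on Squeeze. It assumes the ifenslave-2.6 package is installed, eth0 and eth1 as slaves, and placeholder addresses; the corresponding switch ports must also be grouped into an LACP trunk on the DGS-1210-24:

    # /etc/network/interfaces -- 802.3ad bond over eth0 and eth1
    auto bond0
    iface bond0 inet static
        address 192.168.0.10      # placeholder address
        netmask 255.255.255.0
        gateway 192.168.0.1
        bond-slaves eth0 eth1
        bond-mode 802.3ad
        bond-miimon 100
        bond-downdelay 200
        bond-updelay 200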


[2]
miimon
Specifies the MII link monitoring frequency in milliseconds. This determines how often the link state of each slave is inspected for link failures. A value of zero disables MII link monitoring; a value of 100 is a good starting point. The use_carrier option affects how the link state is determined. See the High Availability section of the kernel bonding documentation for additional information. The default value is 0.
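
Besides the bond-miimon line in /etc/network/interfaces, the value can be passed as a module option; a sketch, assuming the bonding module is loaded at boot:

    # /etc/modprobe.d/bonding.conf
    options bonding mode=4 miimon=100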


[3]
bond-downdelay 200 : Sets the time to wait, here 200 milliseconds, before disabling a slave after a link failure has been detected. This option is only valid when MII link monitoring (bond-miimon) is in use.


bond-updelay 200 : Sets the time to wait, here 200 milliseconds, before enabling a slave after link recovery has been detected. This option is only valid when MII link monitoring (bond-miimon) is in use.
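
Both delays should be multiples of the miimon interval; the driver rounds other values down to the nearest multiple. With a running bond (assuming it is named bond0) they can also be inspected and changed through sysfs:

    # Current values
    cat /sys/class/net/bond0/bonding/downdelay
    cat /sys/class/net/bond0/bonding/updelay
    # Set both to 200 ms without restarting the bond
    echo 200 > /sys/class/net/bond0/bonding/downdelay
    echo 200 > /sys/class/net/bond0/bonding/updelay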


[4]
J.A. Sullivan on the debian-user list writes:

There are a couple of issues in bonding which can bite the unsuspecting (as they did me!). Round robin will load balance across multiple interfaces but can produce serious issues with managing out-of-order TCP packets. Thus, the performance gain decreases dramatically with the number of interfaces. In other words, 2 NICs in RR mode will not give 2x the performance nor 3 NICs 3x the performance. I do not recall the exact numbers off the top of my head but averages are something like:

2 NICs – 1.6x performance
3 NICs – 1.9x performance

The other modes (other than failover) eliminate the out-of-order TCP problem but do so at a cost: all traffic for a single traffic flow goes across a single path. The most common way to identify a single traffic flow is matching source and destination MAC addresses. Some bonding algorithms allow matches on layer 3 or even layer 4 data but, if the switch through which they flow only supports MAC-to-MAC flow assignments, it will all devolve to matching MAC addresses anyway.

So what is the practical outcome using non-RR bonding? You have only one combination of source and destination MAC address for each socket, e.g., if you are measuring a single FTP connection, there is only one combination of source and destination MAC address. Thus, no matter how many NICs you have, all the traffic will flow across one combination of NICs. You will see no performance improvement.

In fact, depending on how the MAC addresses are advertised from the systems with multiple NICs, all traffic between two systems may flow across the same pair of NICs even if there are multiple, different traffic streams.

On the other hand, if you are using bonding to provide a trunk carrying traffic from many different source and destination MAC address combinations, each separate stream will be limited to the maximum of the individual NICs but the aggregate throughput should increase almost linearly with the number of NICs. Hope that helps – John
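
Following up on the layer 3/4 matching mentioned above: the bonding driver can hash on IP addresses and ports instead of MACs, which spreads distinct TCP/UDP flows between the same two hosts across slaves. A sketch as a module option (note the bonding documentation warns that layer3+4 is not fully 802.3ad compliant, since fragmented packets can be reordered, and the switch's return traffic still follows its own hash):

    # /etc/modprobe.d/bonding.conf -- hash on IP addresses and ports
    options bonding mode=4 miimon=100 xmit_hash_policy=layer3+4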



Resources:
[1] http://www.howtoforge.com/nic-bonding-on-debian-lenny
[2] http://www.linuxfoundation.org/collaborate/workgroups/networking/bonding
[3] http://www.cyberciti.biz/tips/debian-ubuntu-teaming-aggregating-multiple-network-connections.html
[4] http://comments.gmane.org/gmane.linux.debian.user/405553

Example /proc/net/bonding/bond0 output with the bond established:

    Ethernet Channel Bonding Driver: v3.5.0 (November 4, 2008)

    Bonding Mode: IEEE 802.3ad Dynamic link aggregation
    Transmit Hash Policy: layer2 (0)
    MII Status: up
    MII Polling Interval (ms): 100
    Up Delay (ms): 200
    Down Delay (ms): 200

    802.3ad info
    LACP rate: slow
    Aggregator selection policy (ad_select): stable
    Active Aggregator Info:
    Aggregator ID: 2
    Number of ports: 2
    Actor Key: 17
    Partner Key: 1
    Partner Mac Address: 1c:7e:e5:21:c4:02

    Slave Interface: eth0
    MII Status: up
    Link Failure Count: 0
    Permanent HW addr: 00:18:f8:0f:32:45
    Aggregator ID: 2

    Slave Interface: eth1
    MII Status: up
    Link Failure Count: 0
    Permanent HW addr: 00:16:e6:d4:df:94
    Aggregator ID: 2
