Monday, April 1, 2013

Ethernet bonding in Linux

Bonding combines multiple Ethernet interfaces into a single logical interface, such as bond0. This improves performance and provides fault tolerance.

Steps for bonding in Fedora Core and Red Hat Linux

Step 1.

Create the file ifcfg-bond0 with the IP address, netmask, and gateway. Below is my sample bond configuration file.

$ cat /etc/sysconfig/network-scripts/ifcfg-bond0

DEVICE=bond0
IPADDR=192.168.10.14
NETMASK=255.255.255.0
GATEWAY=192.168.10.30
USERCTL=no
BOOTPROTO=none
ONBOOT=yes

Step 2.

Modify the eth0, eth1, and eth2 configurations as shown below. Comment out or remove the IP address, netmask, gateway, and hardware address from each of these files, since these settings should come only from the ifcfg-bond0 file above.

$ cat /etc/sysconfig/network-scripts/ifcfg-eth0

DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes

$ cat /etc/sysconfig/network-scripts/ifcfg-eth1

DEVICE=eth1
BOOTPROTO=none
ONBOOT=yes
USERCTL=no
MASTER=bond0
SLAVE=yes

$ cat /etc/sysconfig/network-scripts/ifcfg-eth2

DEVICE=eth2
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes

Step 3.

Set the parameters for the bond0 bonding kernel module. Add the following lines to /etc/modprobe.conf:

# bonding commands
alias bond0 bonding
options bond0 mode=balance-alb miimon=100

Note: Here we configured the bonding mode as "balance-alb". All the available modes are described at the end; choose the mode appropriate to your requirements.
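For example, if you instead wanted an active/backup pair with eth0 preferred, the options line might look like the sketch below. The primary= parameter, which selects the preferred slave, only applies in active-backup mode:

# active-backup with eth0 as the preferred slave
alias bond0 bonding
options bond0 mode=active-backup miimon=100 primary=eth0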

Step 4.

Load the bond driver module from the command prompt.

$ modprobe bonding
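If the module loaded correctly, lsmod will list it:

$ lsmod | grep bonding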

Step 5.

Restart the network:
$ service network restart

Check the proc settings:

$ cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.0.2 (March 23, 2006)

Bonding Mode: adaptive load balancing
Primary Slave: None
Currently Active Slave: eth2
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth2
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:13:72:80:62:f0

Look at ifconfig -a and check that your bond0 interface is active. You are done!
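In the ifconfig output, the bond shows a MASTER flag and the enslaved NICs a SLAVE flag. A sketch of what to expect, reusing the addresses from the sample configuration above (output abbreviated; exact fields vary by distribution):

$ ifconfig bond0
bond0     Link encap:Ethernet  HWaddr 00:13:72:80:62:F0
          inet addr:192.168.10.14  Bcast:192.168.10.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1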

RHEL bonding supports seven possible "modes" for bonded interfaces. These modes determine how traffic sent out of the bonded interface is actually dispersed over the real interfaces. Modes 0, 1, and 2 are by far the most commonly used.

* Mode 0 (balance-rr)
This mode transmits packets in sequential order from the first available slave through the last. If two real interfaces are slaves in the bond and two packets arrive destined out of the bonded interface, the first is transmitted on the first slave and the second on the second slave. The third packet goes to the first slave again, and so on. This provides load balancing and fault tolerance.

* Mode 1 (active-backup)
This mode places one of the interfaces into a backup state and only makes it active if the link is lost by the active interface. Only one slave in the bond is active at any given time; a different slave becomes active only when the active slave fails. This mode provides fault tolerance.

* Mode 2 (balance-xor)
Transmits based on an XOR formula: (source MAC address XOR'd with destination MAC address) modulo slave count. This selects the same slave for each destination MAC address and provides load balancing and fault tolerance (a toy calculation of the hash appears after this list).

* Mode 3 (broadcast)
This mode transmits everything on all slave interfaces. It is the least used (only for specific purposes) and provides only fault tolerance.

* Mode 4 (802.3ad)
This mode is known as Dynamic Link Aggregation mode. It creates aggregation groups that share the same speed and duplex settings. This mode requires a switch that supports IEEE 802.3ad dynamic link aggregation.

* Mode 5 (balance-tlb)
This is called adaptive transmit load balancing. The outgoing traffic is distributed according to the current load and queue on each slave interface. Incoming traffic is received by the current slave.

* Mode 6 (balance-alb)
This is the adaptive load balancing mode. It includes balance-tlb plus receive load balancing (rlb) for IPv4 traffic. The receive load balancing is achieved by ARP negotiation: the bonding driver intercepts ARP replies sent by the server on their way out and overwrites the source hardware address with the unique hardware address of one of the slaves in the bond, so that different clients use different hardware addresses for the server.
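To make the mode 2 (balance-xor) formula concrete, here is a toy calculation in shell arithmetic. The octets are made-up example values; the real driver computes the hash internally over the MAC addresses (by default using their low-order bytes):

# (src MAC last octet XOR dst MAC last octet) modulo 2 slaves -> slave index
$ echo $(( (0x5e ^ 0xf0) % 2 ))
0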
In this example, we'll manually set up a bonding interface and use round-robin load balancing to send traffic.


Load the bonding module with MII monitoring and round-robin load balancing:

$ modprobe bonding mode=balance-rr miimon=100

Configure the bond0 NIC:

$ ifconfig bond0 192.168.10.12 netmask 255.255.255.0 up

Add the two slaves:

$ ifenslave bond0 eth0
$ ifenslave bond0 eth1


That's it! Now the bind_addr for JGroups needs to be 192.168.10.12, and traffic will be load balanced between eth0 and eth1. When eth1 goes down, only eth0 will be used; when eth1 comes back up, it will be used again.
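Should you need to undo the manual setup, a minimal sketch (the -d flag of ifenslave detaches slaves from the bond):

$ ifenslave -d bond0 eth0 eth1
$ ifconfig bond0 down
$ rmmod bonding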

Note that for IP multicasting to work, a route may have to be added to bond0:

route add -net 224.0.0.0 netmask 240.0.0.0 dev bond0
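Afterwards the route should show up against the bond0 device, something like:

$ route -n | grep 224.0.0.0
224.0.0.0       0.0.0.0         240.0.0.0       U     0      0        0 bond0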


Tools like iptraf can now be used to monitor the physical NICs. We would see that traffic is split evenly across eth0 and eth1, and when a NIC is brought down, all traffic will be sent to the remaining NIC until the down NIC is brought up again.
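If iptraf isn't handy, the kernel's own counters tell the same story; a quick sketch that samples the per-NIC byte and packet counts every second:

$ watch -n1 "grep -E 'eth0|eth1' /proc/net/dev"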
