Channel Bonding in Linux
redaix.com/2020/04/creating-bonding-or-teaming-in-linux.html
The Linux bonding driver supports seven bonding modes. Here, we'll review only two of them, which are popular and widely used.
Round-Robin (balance-rr): packets are transmitted sequentially across all slave NICs, providing load sharing as well as fault tolerance.
Active-Backup: only one slave NIC is active at any given time; another interface becomes active only when the active slave fails.
We have two Ethernet cards, eth1 and eth2, and a bond0 interface will be created on top of them. Superuser privileges are needed to execute the commands below.
Configure eth1
Set the MASTER=bond0 parameter and mark the eth1 interface as a SLAVE in its config file, as shown below.
# vi /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE="eth1"
TYPE=Ethernet
ONBOOT="yes"
BOOTPROTO="none"
USERCTL=no
MASTER=bond0
SLAVE=yes
Configure eth2
Here too, set MASTER=bond0 and mark the eth2 interface as a SLAVE.
# vi /etc/sysconfig/network-scripts/ifcfg-eth2
DEVICE="eth2"
TYPE="Ethernet"
ONBOOT="yes"
USERCTL=no
#NM_CONTROLLED=yes
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
Configure bond0
Create the bond interface file, assign the IP details and set the bonding options.
# vi /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
ONBOOT=yes
IPADDR=192.168.246.130
NETMASK=255.255.255.0
BONDING_OPTS="mode=0 miimon=100"
In the above configuration we have chosen the bonding options mode=0, i.e. Round-Robin, and miimon=100 (a link polling interval of 100 ms).
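Once the three files are in place, the new configuration has to be activated before bond0 shows up. A minimal sketch, assuming a RHEL/CentOS 6 style system where the bonding module and the network init script are available:
# modprobe bonding
# service network restart
# cat /sys/class/net/bond0/bonding/mode
# cat /sys/class/net/bond0/bonding/miimon
The two sysfs reads should report balance-rr 0 and 100, confirming that the chosen mode and polling interval were applied.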
Let's look at the interfaces using the ifconfig command, which shows "bond0" running as the MASTER and both interfaces "eth1" and "eth2" running as SLAVEs.
# ifconfig
# watch -n .1 cat /proc/net/bonding/bond0
Sample Output
The output shows that the Bonding Mode is load balancing (round-robin) and that both eth1 and eth2 are present as slaves; the eth2 slave block, for example, looks like this:
Slave Interface: eth2
MII Status: up
Speed: Unknown
Duplex: Unknown
Link Failure Count: 2
Permanent HW addr: 00:0c:29:57:61:98
Slave queue ID: 0
Active-Backup (mode=1)
In this scenario, the slave interfaces remain the same; the only change is in the bond interface file ifcfg-bond0, where the mode value '0' becomes '1', as shown below.
# vi /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
ONBOOT=yes
IPADDR=192.168.246.130
NETMASK=255.255.255.0
BONDING_OPTS="mode=1 miimon=100"
Keep an eye on the bond status while testing:
# watch -n .1 cat /proc/net/bonding/bond0
Manually bring the slave interfaces down and up to check that channel bonding is working. See the commands below.
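A minimal sketch of such a test, assuming the standard ifdown/ifup scripts are available (ip link set eth1 down/up would work equally well):
# ifdown eth1
# cat /proc/net/bonding/bond0
# ifup eth1
# ifdown eth2
# cat /proc/net/bonding/bond0
# ifup eth2
While one slave is down, its MII Status should read "down" and the other slave should be listed as the currently active slave, while the bond0 IP address stays reachable.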
Every system admin would like to avoid server outages by building in redundancy: mirroring the root file system, using multiple FC links to the SAN with multipathing, and so on. So the question is: how do you provide redundancy at the network level? Simply having multiple network cards will not give any redundancy. On Red Hat Linux you need to configure bonding to accomplish network-level redundancy. Once you have configured bonding/teaming using two NIC cards, the kernel will automatically detect the failure of either NIC and handle it gracefully. Bonding can also be used for load sharing between the two physical links.
Goal:
Configure bonding between eth2 and eth4 with the bond name bond0.
Step 1:
Make sure the bonding kernel module is set up to load for the bond0 device. Check the modprobe configuration file:
# cat /etc/modprobe.d/bonding.conf
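On RHEL/CentOS 5 and 6, this file typically contains a single alias line that maps the bond0 device to the bonding kernel module; a representative example:
alias bond0 bonding
Bonding options could also be passed here with an options line, but in this setup they are kept in BONDING_OPTS inside ifcfg-bond0.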
Step 2:
Now it's time to create the bonding interface configuration file in the /etc/sysconfig/network-scripts/ directory, like the one below.
# cat /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
IPADDR=192.168.10.25
NETMASK=255.255.255.0
USERCTL=no
ONBOOT=yes
BOOTPROTO=none
BONDING_OPTS="mode=0 miimon=100"
Step 3:
Configure the slave interfaces eth2 and eth4, pointing each one at bond0 as its MASTER.
# cat /etc/sysconfig/network-scripts/ifcfg-eth2
DEVICE=eth2
HWADDR=00:0C:29:79:17:FA
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
USERCTL=no
# cat /etc/sysconfig/network-scripts/ifcfg-eth4
DEVICE=eth4
HWADDR=00:0C:29:79:17:04
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
USERCTL=no
Note: do not copy-paste the content from the output above; the MAC addresses and DEVICE names will differ on each system.
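Since the MAC addresses and device names differ per system, read them from the running system before editing the files; a small sketch (the interface names here are just examples):
# cat /sys/class/net/eth2/address
# ip link show eth4 | grep link/ether
Use the addresses printed there for the HWADDR lines in the corresponding ifcfg files.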
Step 4:
Restart the network service to activate the bonding configuration.
Note: do not restart the network service outside of a server maintenance window.
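A minimal sketch of activating the configuration, assuming the init-script based network service of RHEL/CentOS 6:
# service network restart
The restart briefly takes all configured interfaces down and brings them back up with bond0 as the MASTER, which is why a maintenance window is recommended.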
Step 5:
Verify the new interfaces using the ifconfig command (the full output is trimmed here).
In the output, you can see that NICs eth2 and eth4 carry the SLAVE flag and the bond0 interface carries the MASTER flag. Also note that both NIC interfaces show the same MAC address.
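To double-check that the bond and its slaves share one MAC address (by default all slaves end up with the same address as the bond), the addresses can be read directly from sysfs, for example:
# cat /sys/class/net/bond0/address /sys/class/net/eth2/address /sys/class/net/eth4/address
All three lines should print the same address; the original hardware addresses remain visible as "Permanent HW addr" in /proc/net/bonding/bond0.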
Step 6:
Now let's perform a live test to ensure that bonding provides fault tolerance.
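One way to watch the failover while the test runs (the ping target 192.168.10.1 is just an assumed gateway on the 192.168.10.0/24 network used above):
# watch -n1 'grep -A1 "Slave Interface" /proc/net/bonding/bond0'
# ping 192.168.10.1
The grep prints each slave together with its MII Status line, so a failed link shows up immediately, while the ping confirms that the bond keeps forwarding traffic.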
First, I am removing the LAN cable from eth4; let's see what happens.
Checking the interfaces again, the bond0 interface is still UP and RUNNING fine, while the RUNNING flag has disappeared from eth4.
Now I have connected the LAN cable back to eth4 and pulled it out from eth2.
The "bond0" interface is still running with the UP and RUNNING flags. So you have successfully configured bonding on Red Hat Linux 6.
(A final check of /proc/net/bonding/bond0 shows MII Status: up for the bond and both slaves, each slave at full duplex, with the bonding mode still balance-rr, i.e. mode 0.)
You can modify the bonding mode by editing “mode” in the ifcfg-bond0 configuration file.
BONDING_OPTS="mode=0 miimon=100"
Policy Details
balance-rr (0): round-robin; packets are sent sequentially over every slave, giving load balancing and fault tolerance.
active-backup (1): only one slave is active; another takes over when it fails.
balance-xor (2): transmits based on a hash of source/destination MAC addresses; load balancing and fault tolerance.
broadcast (3): transmits everything on all slaves; fault tolerance.
802.3ad (4): IEEE 802.3ad dynamic link aggregation (LACP); requires switch support.
balance-tlb (5): adaptive transmit load balancing; no special switch support needed.
balance-alb (6): adaptive load balancing for both transmit and receive; no special switch support needed.