
Configure network bonding on RHEL (Red Hat Enterprise Linux)

西湖醋鱼 · Published 2020-12-30 16:33:14

Question:

      Recently I had to work with RHEL and needed to configure networking across several NICs. That raised two questions: what is network bonding, and how do you set it up? This post answers both.

What's network bonding?

      Network bonding is a method of combining (joining) two or more network interfaces into a single logical interface. It increases network throughput and bandwidth and provides redundancy: if one interface goes down or is unplugged, the other keeps the traffic flowing. Network bonding is useful wherever you need redundancy, fault tolerance or load balancing.

Linux allows us to bond multiple network interfaces into a single interface using a special kernel module named bonding. The Linux bonding driver provides a method for combining multiple network interfaces into a single logical "bonded" interface. The behaviour of the bonded interfaces depends upon the mode; generally speaking, modes provide either hot-standby or load-balancing services. Additionally, link integrity monitoring may be performed.

Types of network Bonding

According to the official documentation, these are the network bonding modes.

mode=0 (balance-rr)

Round-robin policy: this is the default mode. It transmits packets in sequential order from the first available slave through the last. This mode provides load balancing and fault tolerance.

mode=1 (active-backup)

Active-backup policy: In this mode, only one slave in the bond is active. The other one will become active, only when the active slave fails. The bond’s MAC address is externally visible on only one port (network adapter) to avoid confusing the switch. This mode provides fault tolerance.

mode=2 (balance-xor)

XOR policy: Transmit based on [(source MAC address XOR’d with destination MAC address) modulo slave count]. This selects the same slave for each destination MAC address. This mode provides load balancing and fault tolerance.

mode=3 (broadcast)

Broadcast policy: transmits everything on all slave interfaces. This mode provides fault tolerance.

mode=4 (802.3ad)

IEEE 802.3ad Dynamic link aggregation. Creates aggregation groups that share the same speed and duplex settings. Utilizes all slaves in the active aggregator according to the 802.3ad specification.

Prerequisites:

– Ethtool support in the base drivers for retrieving the speed and duplex of each slave. – A switch that supports IEEE 802.3ad Dynamic link aggregation. Most switches will require some type of configuration to enable 802.3ad mode.

mode=5 (balance-tlb)

Adaptive transmit load balancing: channel bonding that does not require any special switch support. The outgoing traffic is distributed according to the current load (computed relative to the speed) on each slave. Incoming traffic is received by the current slave. If the receiving slave fails, another slave takes over the MAC address of the failed receiving slave.

Prerequisite:

– Ethtool support in the base drivers for retrieving the speed of each slave.

mode=6 (balance-alb)

Adaptive load balancing: includes balance-tlb plus receive load balancing (rlb) for IPV4 traffic, and does not require any special switch support. The receive load balancing is achieved by ARP negotiation. The bonding driver intercepts the ARP Replies sent by the local system on their way out and overwrites the source hardware address with the unique hardware address of one of the slaves in the bond such that different peers use different hardware addresses for the server.
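A quick way to check which of these modes a running bond is using is through sysfs. The bond0 name below matches the interface created later in this post; these files only exist once the bond is up.

```shell
# Print the mode of an existing bond, e.g. "active-backup 1" for mode=1
cat /sys/class/net/bond0/bonding/mode

# List the interfaces currently enslaved to it
cat /sys/class/net/bond0/bonding/slaves
```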

 Setting up network Bonding on RHEL

Configure network bonding (mode 1)

shut down NetworkManager (if you don't shut it down, you can use nmcli con reload instead to make NetworkManager reload the config files)

systemctl stop NetworkManager.service     
systemctl disable NetworkManager.service

check that the bonding module is loaded

modprobe --first-time bonding
lsmod | grep bonding
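Besides lsmod, modinfo shows the driver's version string (the v3.7.1 seen in the status output later in this post comes from the driver). As an aside, mode and miimon could also be passed as module options at load time, though this post sets them per-bond via BONDING_OPTS instead.

```shell
# Show the bonding driver's version and description
modinfo bonding | grep -E '^(version|description)'

# Alternative (not used in this post): set defaults when loading the module
# modprobe bonding mode=1 miimon=100
```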

create the bond0 interface file 

vim /etc/sysconfig/network-scripts/ifcfg-bond0
TYPE=Bond
BOOTPROTO=dhcp
NAME=bond0
DEVICE=bond0
ONBOOT=yes
BONDING_MASTER=yes
BONDING_OPTS="mode=1 miimon=100"
#IPADDR=10.73.73.21
#PREFIX=24
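If you want a static address instead of DHCP, a variant of the same file might look like the sketch below. The address echoes the commented-out lines above; the GATEWAY value is an assumption for illustration and must match your own network.

```shell
# /etc/sysconfig/network-scripts/ifcfg-bond0 -- static-IP variant (example values)
TYPE=Bond
BOOTPROTO=none
NAME=bond0
DEVICE=bond0
ONBOOT=yes
BONDING_MASTER=yes
BONDING_OPTS="mode=1 miimon=100"
IPADDR=10.73.73.21
PREFIX=24
GATEWAY=10.73.73.1     # assumed gateway, adjust for your network
```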

edit the two slave interface files: ifcfg-eno1 and ifcfg-eno2

[root@hp-dl320eg8-16 network-scripts]# cat ifcfg-eno1 
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=eno1
UUID=fa3a6d8b-2000-4995-a6e4-c93cf3480ac1
DEVICE=eno1
ONBOOT=yes
MASTER=bond0
SLAVE=yes
[root@hp-dl320eg8-16 network-scripts]# cat ifcfg-eno2
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=xuyaowen
UUID=3624711a-f96d-40cb-9b06-0f10031c0895
DEVICE=eno2
ONBOOT=yes
MASTER=bond0
SLAVE=yes

restart the network

systemctl restart network

check the bond0 status

cat /proc/net/bonding/bond0
[root@hp-dl320eg8-16 network-scripts]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eno1
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eno1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 64:51:06:0d:fb:78
Slave queue ID: 0

Slave Interface: eno2
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 64:51:06:0d:fb:79
Slave queue ID: 0

As you can see in the output above, the bond0 interface is up and running, configured in active-backup mode (mode 1). In this mode, only one slave in the bond is active; the other becomes active only when the active slave fails.
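To verify that failover actually works, you can take the active slave down and watch the backup take over. Run this from the console rather than over the bonded link itself, since the link may blip during the switch.

```shell
# Take the currently active slave (eno1 in the output above) offline
ifdown eno1

# The backup slave should now be reported as active
grep "Currently Active Slave" /proc/net/bonding/bond0
# Expected: Currently Active Slave: eno2

# Bring eno1 back; with mode=1 and no "primary" option set, eno2 stays active
ifup eno1
```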

finished!

some useful network commands:

ifup ifcfg-bond0

ifdown ifcfg-bond0

to bring the interface up or down.

if NetworkManager is running:

use nmcli con reload to let NetworkManager pick up the config changes.

use ip addr show to display the addresses.
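If you prefer to keep NetworkManager running instead of disabling it, the same active-backup bond can be sketched entirely with nmcli. The connection names below are arbitrary examples; the interface names match the ones used earlier in this post.

```shell
# Create the bond and set its options (DHCP on the bond, as in the example above)
nmcli con add type bond con-name bond0 ifname bond0 \
    bond.options "mode=active-backup,miimon=100" ipv4.method auto

# Enslave the two NICs to the bond
nmcli con add type bond-slave con-name bond0-eno1 ifname eno1 master bond0
nmcli con add type bond-slave con-name bond0-eno2 ifname eno2 master bond0

# Activate the bond
nmcli con up bond0
```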

References:

  1. Linux Basics: Create Network Bonding On CentOS 7/6.5
  2. RHEL 7 Networking Guide
  3. 多网卡的7种bond模式原理
  4. linux下网卡bonding配置

This post is kept up to date; please credit the source when reposting.
