Appliance - Bonding the NIC
<main> <article class="userContent"> <h2 data-id="summary"><strong>SUMMARY</strong></h2> <p>This article explains how to bond NICs on Recovery Series appliances.</p> <h2 data-id="issue"><strong>ISSUE</strong></h2> <p>Customers who want redundant networking for their Unitrends appliances typically ask about NIC bonding options for Rack Series appliances. This KB provides the information required to configure redundant network connectivity. <br><br><strong>Note</strong>: LACP link aggregation will <em>not</em> inherently allow more throughput to the appliance or increase threading. Bonding here is LACP (mode 4) fail-over aggregation: the primary intent of bonding NICs in a Unitrends appliance is to prevent interruption of backup or restore operations should a switch, NIC, cable, or port fail, not to increase performance. Creating a bond requires not only software configuration on the Unitrends appliance but also proper switch configuration. Many switch vendors do not support bonding, require a minimum of two switches, or require additional software licensing on the switch. Before attempting bonding, confirm your switch is properly configured to fail over. Any bond configuration should also be tested while backups are not running, to ensure failover functions as expected. </p> <p>If you are seeking faster backups, 10G solutions will in theory allow faster aggregate speeds, but individual backup performance is limited by client performance and switch overhead, and real speeds in excess of 10 Gbit are rarely seen outside of theoretical testing. Most Windows systems cannot saturate 1 Gbit, let alone 10. Aggregate speeds beyond 10 Gbit are attained only by running multiple concurrent jobs; it typically takes several concurrent high-performance systems or virtual nodes to exceed even a single physical connection. 
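</p>
<p>Failover testing as advised above can be spot-checked against the kernel's bonding status file once a bond exists. The sketch below is illustrative only: it feeds a hard-coded sample of /proc/net/bonding/bond0-style output into the check so it runs anywhere; on a real appliance you would read the file itself, and the interface names are assumptions.</p>

```shell
#!/bin/sh
# Count bond slave links reported down in /proc/net/bonding/<bond>-style text.
# Reads the status text on stdin so the sketch can run without real hardware.
check_bond() {
    down=$(grep -c '^MII Status: down')
    if [ "$down" -gt 0 ]; then
        echo "WARNING: $down link(s) down"
    else
        echo "all links up"
    fi
}

# SAMPLE status text (illustrative, not captured from a real appliance).
# On an appliance you would instead run: check_bond < /proc/net/bonding/bond0
printf '%s\n' \
    'Bonding Mode: IEEE 802.3ad Dynamic link aggregation' \
    'MII Status: up' \
    'Slave Interface: eth1' \
    'MII Status: up' \
    'Slave Interface: eth2' \
    'MII Status: down' | check_bond
```

<p>A healthy bond shows every MII Status line as up; pulling one cable during a test window should flip only that slave's line to down while operations continue.</p>
<p>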
The intent of bonding is thus not to increase performance, but to add network resiliency. <br><br>Note that connecting multiple NICs on different IPs in the same VLAN is not a bond. Never connect multiple NICs to the same routable VLAN unless you are using bonding. A bond is a software and switch configuration that presents a <strong>single</strong> IP address and a single virtual MAC address across multiple adapters as one object. Connecting multiple NICs to the same VLAN without bonding creates TCP/IP configurations that most switches will not support and that Linux itself does not support. Improperly connecting several adapters in one subnet can lead to ARP broadcast storms, severe performance degradation of your entire network, connection instability, MAC confusion in the switch infrastructure, and disconnection of services. If Unitrends Support identifies that you have more than one NIC in the same VLAN without bonding, they will ask that the redundant NICs be disabled unless a true bonding configuration is possible in your environment. When using multiple adapters properly, connect each to an independent non-routable VLAN, with a gateway configured on only one VLAN. <br><br></p> <h2 data-id="resolution"><strong>RESOLUTION</strong></h2> <p>UEB virtual systems do not support bonding; bonding should be done at the host level, not the guest level.</p> <p><strong>The default NIC in a hardware appliance can never be part of a software NIC bond, as it is already hardware-bonded to the IPMI or iDRAC port. This means the first two NICs in any appliance can never be part of a bond together. </strong></p> <p><br><strong>Unitrends supports bonding NICs of the same type only. 
Either two onboard NICs or two SFP ports; all NICs in a bond must run at the same speed. Never mix onboard and SFP ports, or 1G and 10G NICs, in a single bond. </strong></p> <p><br>With the above understood, if a bond is still appropriate to ensure network reliability in the event of a single link disruption, configure bonding on a Unitrends backup appliance using /usr/bp/bin/cmc_bonding:</p> <pre class="code codeBlock" spellcheck="false" tabindex="0">Usage: cmc_bonding [args]

action       args
----------   --------------------------------------------------------------
create       bond-name mode miimon ipaddress gateway netmask slavesX...slavesY
             - Creates bonding device &lt;bond-name&gt;. Min of 2 slaves, max of 5
destroy      bond-name
             - Destroys bonding device &lt;bond-name&gt;
add          bond-name slavesX...slavesY
             - Adds slaves to existing &lt;bond-name&gt;. Min of 1 slave, max of 3
remove       bond-name slavesX...slavesY
             - Removes slaves from existing &lt;bond-name&gt;. Min of 1 slave, max of 3
mode         bond-name mode
             - Changes bonding mode of device &lt;bond-name&gt;
view_config  bond-name
             - Prints bonding device &lt;bond-name&gt;'s configuration
list_slaves  bond-name
             - Shows list of registered slaves to &lt;bond-name&gt;
list_bonds   - Shows list of created bonding devices</pre> <p><br>When you run ifconfig and route, you should see the bond0 master interface with eth1, eth2, and eth3 as slaves, and the default route should be on bond0. To ensure data routes over the bond0 interface, remove the gateway for eth0 using the Rapid Recovery Console (RRC) and restart the network:<br> </p> <pre class="code codeBlock" spellcheck="false" tabindex="0">[root@Recovery-722 ~]# service network restart
Shutting down interface eth0:     [  OK  ]
Shutting down loopback interface: [  OK  ]
Bringing up loopback interface:   [  OK  ]
Bringing up interface eth0:       [  OK  ]</pre> <p> </p> <p>The cmc_bonding utility is not persistent across reboots. 
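</p>
<p>Because cmc_bonding takes a long positional argument list, a small wrapper can sanity-check the arguments before anything is applied. This is a hypothetical helper, not part of the appliance: the argument order and the 2-to-5-slave limit follow the usage text above, and the command is only echoed (a dry run) rather than executed.</p>

```shell
#!/bin/sh
# Hypothetical dry-run wrapper for "cmc_bonding create".
# Argument order per the usage text:
#   create <bond-name> <mode> <miimon> <ip> <gateway> <netmask> <slave>...
make_bond() {
    bond="$1" mode="$2" miimon="$3" ip="$4" gw="$5" mask="$6"
    shift 6
    # The usage text requires a minimum of 2 and a maximum of 5 slaves.
    if [ "$#" -lt 2 ] || [ "$#" -gt 5 ]; then
        echo "error: a bond needs 2 to 5 slave NICs" >&2
        return 1
    fi
    # Echo instead of executing so the sketch is safe to run anywhere.
    echo /usr/bp/bin/cmc_bonding create "$bond" "$mode" "$miimon" "$ip" "$gw" "$mask" "$@"
}

# Example: LACP mode 4, miimon 100.
make_bond bond0 4 100 192.168.101.35 192.168.101.3 255.255.255.0 eth1 eth2 eth3
```

<p>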
To make it persistent, add the cmc_bonding command, with its absolute path, to /etc/rc.local:<br> </p> <pre class="code codeBlock" spellcheck="false" tabindex="0">[root@Recovery-722 ~]# vim /etc/rc.local</pre> <p>When you are done, the file will look like this:</p> <pre class="code codeBlock" spellcheck="false" tabindex="0">#!/bin/sh
#
# This script will be executed *after* all the other init scripts.
# You can put your own initialization stuff in here if you don't
# want to do the full Sys V style init stuff.

/usr/bp/bin/cmc_bonding create bond0 4 100 192.168.101.35 192.168.101.3 255.255.255.0 eth1 eth2 eth3
touch /var/lock/subsys/local</pre> <p> </p> <h3 data-id="bonding-attributes">Bonding Attributes</h3> <p>Miimon:</p> <p>Specifies the MII link-monitoring frequency in milliseconds, which determines how often each slave's link state is inspected for failures. A value of zero disables MII link monitoring; a value of 100 is a good starting point. Discuss other values for this setting with your switch vendor.</p> <p>Mode:</p> <p>Specifies the protocol the bonding driver uses for its slaves. Use only mode 0 or mode 1 for 10 Gbit bonding. The following information is provided for reference only. </p> <ul><li>Mode 0 (balance-rr): Transmits packets in sequential order from the first available slave through the last. If two real interfaces are slaves in the bond and two packets arrive destined out of the bonded interface, the first is transmitted on the first slave and the second on the second slave; the third packet goes to the first slave again, and so on. This mode provides load balancing and fault tolerance.</li> <li>Mode 1 (active-backup): Places one interface into a backup state and activates it only if the active interface loses its link. Only one slave in the bond is active at a time. 
A different slave becomes active only when the active slave fails. This mode provides fault tolerance.</li> </ul><p>On CentOS 7 appliances you may receive the error "*interface* is not allowed to be a slave device. This will invalidate your Unitrends license." This is caused by a mismatch between the interface naming scheme in the OS and the names the bonding script expects. In this case, edit the /usr/bp/bin/cmc_bonding script and comment out the following lines:</p> <pre class="code codeBlock" spellcheck="false" tabindex="0"># eth0 or ens32?
#if0=`ls ${NET_CFG}* |grep -E 'ifcfg-eth|ifcfg-en' |sort |head -n1|sed -e 's/.*ifcfg-//'`
#if [ "$SLAVE" = "$if0" ]; then
#  echo "$if0 is not allowed to be a slave device. This will invalidate your Unitrends license." &gt;&amp;2
#  return 1
#fi</pre> <p>This should allow you to proceed with mode 0 or mode 1 bonding for 10G adapters on CentOS 7 appliances.<br> </p> <h2 data-id="notes"><strong>NOTES</strong></h2> <p>This applies only to backup appliances with three or more onboard NICs, or to systems with the 4-port PCI NIC card or the 2-port SFP card added. The appliance's default NIC must NEVER be part of a NIC bond. When bonding other NICs, it is recommended that the default NIC and the IPMI or iDRAC port both remain connected and online, because the license key is bound to this NIC and should not be bound to a software bond. The default NIC should be connected to a different VLAN than the production NICs, or have its gateway removed and be assigned an unused IP in an unused network segment (that is, an IP that does not conflict with any internal range in use). This allows an easy return to the original NIC should the bond fail to operate, without creating licensing conflicts. </p> <p>Bonding in Linux is not enough on its own to assure network failover. It is also critical to ensure your network switches are configured for failover and that failover has been tested as described above.<br><br></p> </article> </main>