You can configure dynamic routing on the BIG-IP® system by enabling and configuring any of the advanced routing modules. You enable one or more advanced routing modules, as well as the Bidirectional Forwarding Detection (BFD) protocol, on a per-route-domain basis. Advanced routing module configuration on the BIG-IP system provides these functions:
The BIG-IP® advanced routing modules support these protocols.
| Protocol Name | Description | Daemon | IP version supported |
| --- | --- | --- | --- |
| BFD | Bidirectional Forwarding Detection is a protocol that detects faults between two forwarding engines connected by a link. On the BIG-IP system, you can enable the BFD protocol specifically for the OSPFv2, BGP4, and IS-IS dynamic routing protocols. | oamd | IPv4 and IPv6 |
| BGP4 | Border Gateway Protocol (BGP) with multi-protocol extension is a dynamic routing protocol for external networks that supports the IPv4 and IPv6 addressing formats. | bgpd | IPv4 and IPv6 |
| IS-IS | Intermediate System-to-Intermediate System (IS-IS) is a dynamic routing protocol for internal networks, based on a link-state algorithm. | isisd | IPv4 and IPv6 |
| OSPFv2 | The Open Shortest Path First (OSPF) protocol is a dynamic routing protocol for internal networks, based on a link-state algorithm. | ospfd | IPv4 |
| OSPFv3 | The OSPFv3 protocol is an enhanced version of OSPFv2. | ospf6d | IPv6 |
| PIM | The Protocol Independent Multicast (PIM) protocol is a dynamic routing protocol for multicast packets from a server to all interested clients. | pimd | IPv4 and IPv6 |
| RIPv1/RIPv2 | Routing Information Protocol (RIP) is a dynamic routing protocol for internal networks, based on a distance-vector algorithm (number of hops). | ripd | IPv4 |
| RIPng | The RIPng protocol is an enhanced version of RIPv2. | ripngd | IPv6 |
Bidirectional Forwarding Detection (BFD) is an industry-standard network protocol on the BIG-IP® system that provides a common service to the dynamic routing protocols BGP4, OSPFv2, and IS-IS. Enabled on a per-route-domain basis, BFD identifies changes to the connectivity between two forwarding engines, or endpoints, by transmitting periodic BFD control packets on each path between the two endpoints. When either endpoint fails to receive these control packets for a specific duration of time, the connectivity between the endpoints is considered lost, and BFD notifies the associated dynamic routing protocols. In general, BFD detects connectivity changes more rapidly than the endpoints' standard Hello mechanisms, leading to quicker network convergence, which is highly desirable for data center applications.
BFD operates by establishing a session between two endpoints, sending BFD control packets over the link. If more than one link exists between two endpoints, BFD can establish multiple sessions to monitor each link.
A BFD session can operate in one of two modes: asynchronous mode or demand mode.
The first step in configuring the Bidirectional Forwarding Detection (BFD) protocol on the BIG-IP® system is to use the IMI Shell within tmsh to configure the protocol for the relevant advanced routing modules (BGP4, OSPFv2, and IS-IS).
After configuring BFD protocol behavior, you enable the protocol on one or more specific route domains.
You must enable the Bidirectional Forwarding Detection (BFD) network protocol on a per-route domain basis. Use this task to enable BFD on an existing route domain.
There are two common BFD commands that you can use to perform BFD base configuration. To use these commands, you use the IMI Shell within tmsh.
| Sample command line sequence | Result |
| --- | --- |
| `bigip (config-if)# bfd interval 100 minrx 200 multiplier 4` | Sets the desired Min TX interval, the required Min RX interval, and the Detect Multiplier. |
| `bigip (config)# bfd slow-timer 2000` | Sets the BFD slow timer to two seconds. |
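As an illustrative sketch (Python, not BIG-IP code), the failure detection time that falls out of the sample `bfd interval 100 minrx 200 multiplier 4` values can be computed using the RFC 5880 rule: the peer's Detect Multiplier times the slower of the two negotiated intervals.

```python
def bfd_detection_time_ms(local_required_min_rx, peer_desired_min_tx, peer_detect_mult):
    """Detection time at the local endpoint, per RFC 5880: the peer's
    Detect Mult times the slower of the two negotiated intervals."""
    return peer_detect_mult * max(local_required_min_rx, peer_desired_min_tx)

# With "bfd interval 100 minrx 200 multiplier 4" configured on both endpoints,
# each side requires at least 200 ms between received packets while the peer
# offers to transmit every 100 ms, so packets flow at 200 ms intervals and a
# failure is declared after 4 missed packets:
print(bfd_detection_time_ms(200, 100, 4))  # 800 (milliseconds)
```

This shows why a BFD session can react in well under a second, while standard routing protocol Hello timers typically detect failures in tens of seconds.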
There are a number of common BFD commands that you can use to perform BFD routing configuration. To use these commands, you use the IMI Shell within tmsh.
| Protocol | Sample command line sequence | Result |
| --- | --- | --- |
| BGP4 | `bigip (config-if)# neighbor 188.8.131.52 fallover bfd multihop` | Enables multi-hop bidirectional forwarding detection to BGP neighbor 188.8.131.52. |
| OSPFv2 | `bigip (config)# bfd all-interfaces` | Enables single-hop bidirectional forwarding detection for all OSPF neighbors. |
| OSPFv2 | `bigip (config)# area 1 virtual-link 220.127.116.11 fallover bfd` | Enables multi-hop bidirectional forwarding detection to OSPF router 220.127.116.11. |
| IS-IS | `bigip (config-if)# bfd all-interfaces` | Enables bidirectional forwarding detection for all IS-IS neighbors. |
The Protocol Independent Multicast (PIM) protocol (which is available with licensed ZebOS® dynamic routing on the BIG-IP® system) comprises a group of carrier-class multicast routing protocols for the distribution of data. Within this group, the BIG-IP system supports the PIM Dense Mode (PIM-DM) and PIM Sparse Mode (PIM-SM) protocols, which provide dense mode, sparse mode, and sparse-dense mode multicast routing. Sparse-dense mode only applies to IPv4 and is proprietary to Cisco®.
You can configure a BIG-IP® system non-default route domain to use the Protocol Independent Multicast (PIM) protocol sparse mode (SM), which is available with a Multicast Routing Bundle license in addition to licensed ZebOS® dynamic routing. This configuration provides an effective multicast solution for Wide Area Networks (WANs) with sparsely distributed groups.
You can configure a BIG-IP® system non-default route domain to use the Protocol Independent Multicast (PIM) protocol dense mode (DM), which is available with a Multicast Routing Bundle license in addition to licensed ZebOS® dynamic routing. This configuration provides an effective multicast solution for Local Area Networks (LANs) that are configured with listeners at most locations.
For example, in a PIM-DM configuration, you can configure the Local Traffic Manager™ (LTM) to use a wildcard forwarding virtual server to flood IPv4 or IPv6 multicast traffic throughout the domain. Interested listeners, located downstream from a PIM router, use Internet Group Management Protocol version 3 (IGMPv3) for IPv4 traffic or Multicast Listener Discovery version 2 (MLDv2) for IPv6 traffic to join the multicast group. When the LTM® receives traffic, it sends that traffic to all active PIM routers, which then forward the packets toward the interested listeners. PIM routers without interested listeners prune themselves from the multicast group. When a listener leaves the multicast group, the upstream router sends a PIM prune message to the LTM, which removes that router from the group's forwarding list. This enables the LTM to send group traffic only to active PIM routers and to use the shortest-path trees.
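The flood-and-prune behavior described above can be sketched as a small model of one group's forwarding list on the flooding device. This is an illustrative sketch with invented names, not BIG-IP code.

```python
class DenseModeGroup:
    """Toy model of one multicast group's forwarding list in PIM-DM
    flood-and-prune (hypothetical names; not BIG-IP code)."""

    def __init__(self, downstream_routers):
        # Flood first: every downstream PIM router initially receives traffic.
        self.forwarding_list = set(downstream_routers)

    def receive_prune(self, router):
        # A PIM prune from a router with no interested listeners removes
        # that router from the group's forwarding list.
        self.forwarding_list.discard(router)

    def receive_graft(self, router):
        # When a new listener appears behind a pruned router, a graft
        # message re-adds it to the forwarding list.
        self.forwarding_list.add(router)

group = DenseModeGroup({"rtrA", "rtrB", "rtrC"})
group.receive_prune("rtrB")           # rtrB reports no interested listeners
print(sorted(group.forwarding_list))  # ['rtrA', 'rtrC']
```

After the prune, group traffic goes only to the routers that still have interested listeners, matching the behavior described in the paragraph above.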
Additionally, a PIM-DM configuration applies reverse path forwarding functionality, using the unicast routing information base (RIB), to provide loop-free forwarding of multicast traffic.
The IPv4 Protocol Independent Multicast (PIM) Sparse-Dense protocol is a Cisco proprietary protocol. Combining PIM dense mode (DM) and sparse mode (SM) into a single process offers simplicity, maintainability, and better performance. Certain groups can be explicitly configured for DM, while leaving the rest operating in SM.
The Protocol Independent Multicast (PIM) protocol dense mode (DM) configuration applies reverse path forwarding (RPF) functionality, which uses a unicast routing information base (RIB) to provide loop-free forwarding of multicast traffic.
For example, an RPF router is located on an interface that sends unicast packets to the source. When a multicast packet arrives on an RPF router, the router forwards the packet to the interfaces specified by the unicast RIB. If a multicast packet arrives on a non-RPF interface, the packet is discarded, thus preventing a loop condition.
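The RPF decision in the example can be expressed as a small check. This is an illustrative sketch (the RIB shape and names are invented for the example), not BIG-IP code.

```python
def rpf_check(unicast_rib, source, arriving_interface):
    """Accept a multicast packet only if it arrived on the interface that
    the unicast RIB would use to reach the packet's source; otherwise the
    packet is discarded, preventing a forwarding loop."""
    expected_interface = unicast_rib.get(source)
    return arriving_interface == expected_interface

# Hypothetical unicast RIB: the route back toward source 10.0.1.5 points
# out the "external" interface.
rib = {"10.0.1.5": "external"}
print(rpf_check(rib, "10.0.1.5", "external"))  # True  -> forward the packet
print(rpf_check(rib, "10.0.1.5", "internal"))  # False -> discard (non-RPF interface)
```

A real implementation performs a longest-prefix-match lookup on the source address rather than an exact-match dictionary lookup, but the accept/discard logic is the same.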
Maximum Multicast Rate functionality provides a packets-per-second rate limit intended for non-broadcast multicast functionality. To optimize performance and throughput for multicast traffic, you can disable this functionality by clearing the Maximum Multicast Rate check box.
The first step in configuring Protocol Independent Multicast (PIM) protocol on the BIG-IP® system is to use the IMI Shell within tmsh to configure the protocol for the relevant advanced routing module (PIM).
After configuring PIM protocol behavior, you enable the protocol on one or more specific route domains.
There are several commands that you can use to enable multicast routing. To use these commands, you access the IMI Shell within tmsh.
| Sample command line sequence | Result |
| --- | --- |
| `bigip>en` | Enters privileged EXEC mode. |
| `bigip#conf t` | Enters global configuration mode. |
| `bigip (config-if)#ip pim dense-mode` | Enables IPv4 PIM dense-mode functionality. |
| `bigip (config-if)#ipv6 pim dense-mode` | Enables IPv6 PIM dense-mode functionality. |
| `bigip (config-if)#ip pim sparse-mode` | Enables IPv4 PIM sparse-mode functionality. |
| `bigip (config-if)#ipv6 pim sparse-mode` | Enables IPv6 PIM sparse-mode functionality. |
| `bigip (config-if)#ip pim sparse-dense-mode` | Enables IPv4 PIM sparse-dense-mode functionality. |
| `bigip (config-if)#ipv6 pim sparse-dense-mode` | Enables IPv6 PIM sparse-dense-mode functionality. |
There are a number of common PIM commands that you can use to perform PIM routing configuration. To use these commands, you access the IMI Shell within tmsh.
| Sample command line sequence | Result |
| --- | --- |
| `bigip (config)#interface external` | Specifies configuration of the external port. |
| `bigip (config-if)#ip pim dense-mode` | Configures PIM dense mode for the external port. |
| `bigip (config)#ip pim dense-group` | Configures a particular multicast group to function in dense mode while all other groups function in sparse mode. |
There are a number of common tmsh commands that you can use to assess Protocol Independent Multicast (PIM) interfaces.
| Sample command line sequence | Result |
| --- | --- |
| `bigip (config-if)# tmsh show net mroute` | Lists multicast sources, groups, incoming interfaces, and outgoing interfaces. |
| `bigip (config-if)# tmsh show sys ip-stat \| grep Multicast` | Summarizes multicast statistics. |
Protocol Independent Multicast (PIM) protocol functionality, available with licensed ZebOS® dynamic routing, supports several types of tunnels.
Some of the advanced routing modules on the BIG-IP® system include support for Equal Cost Multipath (ECMP) routing. ECMP is a forwarding mechanism for routing a traffic flow along multiple paths of equal cost, with the goal of achieving equally-distributed link load sharing. By load balancing traffic over multiple paths, ECMP offers potential increases in bandwidth, as well as some level of fault tolerance when a path on the network becomes unavailable.
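Per-flow path selection of the kind ECMP performs is commonly implemented by hashing a flow's 5-tuple over the set of equal-cost next hops. The sketch below is an illustrative model of that technique, not the BIG-IP implementation.

```python
import hashlib

def ecmp_next_hop(next_hops, src_ip, dst_ip, proto, src_port, dst_port):
    """Pick one of several equal-cost next hops by hashing the flow's
    5-tuple. Every packet of a given flow hashes to the same next hop,
    so packets stay in order, while distinct flows spread across paths."""
    flow = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    digest = hashlib.sha256(flow).digest()
    index = int.from_bytes(digest[:4], "big") % len(next_hops)
    return next_hops[index]

# Hypothetical equal-cost next hops for one destination prefix:
paths = ["198.51.100.1", "198.51.100.2", "198.51.100.3"]

# The same flow always selects the same path:
first = ecmp_next_hop(paths, "10.0.0.1", "10.9.9.9", "tcp", 40000, 443)
again = ecmp_next_hop(paths, "10.0.0.1", "10.9.9.9", "tcp", 40000, 443)
assert first == again
```

If one path becomes unavailable, the routing protocol withdraws that next hop and the remaining paths absorb the traffic, which is the fault-tolerance property noted above.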
The BIG-IP® system deploys Equal Cost Multipath (ECMP) routing with these advanced routing modules:
ECMP is enabled by default for all of these advanced routing modules except BGP4. For BGP4, you must explicitly enable the ECMP forwarding mechanism.
You can enable the Equal Cost Multipath (ECMP) forwarding mechanism for the BGP4 advanced routing module, using the Traffic Management Shell (tmsh) command line interface. When you enable ECMP for BGP4, the BIG-IP® system provides multiple paths for a traffic flow to choose from, in order to reach the destination.
When you enable advanced routing modules for a route domain, the BIG-IP system creates a dynamic routing startup configuration. Each route domain has its own dynamic routing configuration, located in the folder /config/zebos/rdn, where n is the numeric route domain ID.
Perform this task when you want to use IMI Shell (imish) to configure any of the dynamic routing protocols. Note that if you are using the route domains feature, you must specify the route domain pertaining to the dynamic routing protocol that you want to configure.
For each route domain on the BIG-IP system (including route domain 0), you can enable one or more dynamic routing protocols, as well as the network protocol Bidirectional Forwarding Detection (BFD). For example, you can enable BGP4 and OSPFv3 on a specific route domain. Use of dynamic routing protocols for a route domain is optional.
When you enable dynamic routing on a specific route domain, the BIG-IP system creates a dynamic routing instance. This dynamic routing instance is made up of the core dynamic routing daemons (imi and nsm), as well as each relevant dynamic routing protocol daemon. If you enable BFD, the BFD instance is made up of the oamd protocol daemon. Thus, each dynamic routing instance for a route domain has a separate configuration. You manage a dynamic routing configuration using the IMI Shell (imish) within the BIG-IP Traffic Management Shell (tmsh).
The first step in configuring dynamic routing protocols on the BIG-IP system is to enable one or more routing protocols, as well as, optionally, the Bidirectional Forwarding Detection (BFD) network protocol. A protocol is enabled when at least one instance of the protocol is enabled on a route domain.
Perform this task to disable an instance of a routing or network protocol that is currently associated with a route domain other than route domain 0.
After disabling a dynamic routing protocol for a route domain, the BIG-IP system stops the daemon of the specified protocol, resulting in these effects:
bgpd is running 
Route Health Injection (RHI) is the system process of advertising the availability of virtual addresses to other routers on the network. You can configure two aspects of RHI: route advertisement and route redistribution.
Route advertisement is the function that the BIG-IP® system performs when advertising a route for a virtual address to the Traffic Management Microkernel (TMM) routing table. You must configure route advertisement to ensure that the dynamic routing protocols propagate this route to other routers on the network.
When configuring route advertisement for a virtual address, you can specify the particular condition under which you want the BIG-IP system to advertise the address. The available conditions that you can choose from, and their descriptions, are:
After you specify the desired behavior of the system with respect to route advertisement, the tmrouted daemon attempts to comply. The daemon only succeeds in advertising the route for the virtual address when the relevant virtual servers, pools, and pool members collectively report their status in specific combinations.
The tmrouted daemon within the BIG-IP® system considers a virtual IP address to be in an UP state when any one of the following conditions is true:
This table shows the ways that Local Traffic Manager™ (LTM®) object status affects whether the BIG-IP® system advertises a route to a virtual address. The table summarizes the collective LTM object status that determines route advertisement.
| Route advertised? | Status summary |
| --- | --- |
| Yes | Pool members are monitored and UP. The virtual address is UP. |
| Yes | Pool or pool members are unmonitored. The virtual address is enabled. |
| Yes | Pool members are disabled. Other objects are enabled. |
| Yes | Virtual server is disabled. Virtual address is enabled. |
| Yes | The pool has no members. The virtual address is enabled. |
| Yes | Virtual server has no pool assigned. |
| No | Pool members are monitored and DOWN. |
| No | Virtual server and virtual address are disabled. |
| No | Virtual address is disabled. Other objects are enabled. |
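The outcomes in the table above can be condensed into a small decision function. This is an illustrative sketch with invented parameter names, not BIG-IP code, and it simplifies the object model to the states the table distinguishes.

```python
def advertise_route(virtual_address_enabled, members_monitored,
                    any_member_up, pool_has_members=True):
    """Illustrative condensation of the route-advertisement table:
    advertise only when the virtual address is enabled and no health
    monitor reports the pool DOWN. Unmonitored or empty pools (and
    virtual servers with no pool) count as available."""
    if not virtual_address_enabled:
        return False        # a disabled virtual address is never advertised
    if not pool_has_members or not members_monitored:
        return True         # no pool/members, or unmonitored: advertise
    return any_member_up    # monitored members must report UP

# These calls mirror the table rows:
assert advertise_route(True, True, True)                           # monitored and UP
assert advertise_route(True, False, False)                         # unmonitored objects
assert advertise_route(True, True, False, pool_has_members=False)  # pool has no members
assert not advertise_route(True, True, False)                      # monitored and DOWN
assert not advertise_route(False, True, True)                      # virtual address disabled
```

Note how the virtual server's own enabled/disabled state never changes the outcome in the table; only the virtual address and monitored pool status matter.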
The BIG-IP® Configuration utility displays various colored icons to report the status of virtual servers, virtual addresses, pools, and pool members.
Perform this task to specify the criterion that the BIG-IP system uses to advertise routes for virtual addresses. You must perform this task if you want the dynamic routing protocols to propagate this route to other routers on the network.
After performing this task, you should see the advertised routes for virtual addresses. For example, advertised routes for virtual addresses 10.1.51.80/32 and 10.1.51.81/32 appear as follows:
K 10.1.51.80/32 is directly connected, tmm0
K 10.1.51.81/32 is directly connected, tmm0
The /32 netmask indicates that the IP addresses pertain to individual hosts, and the tmm0 indicator shows that protocols on other routers have learned these routes from the Traffic Management Microkernel (TMM).
Perform this task to delay the withdrawal of RHI routes when operation status changes. Delaying route withdrawal prevents short route flaps that might occur due to both the short period during failover when both devices are in a standby state, and the periodic housekeeping processes in routing protocol daemons (specifically bgpd).
You can explicitly configure each dynamic routing protocol to redistribute routes for advertised virtual addresses, to ensure that other routers on the network learn these routes. For purposes of redistribution, the dynamic routing protocols consider any route generated through Route Health Injection (RHI) to be a host route.
This example shows an entry in the OSPF configuration. When you add this statement to the OSPF configuration, the BIG-IP system redistributes the route for the virtual address.
router ospf
 redistribute kernel
You can optionally specify a route-map reference that specifies the route map to use for filtering routes prior to redistribution. For example:
redistribute kernel route-map external-out
Route maps provide an extremely flexible mechanism for fine-tuning redistribution of routes using the dynamic routing protocols.
The BIG-IP system advertises all self IP addresses, including floating self IP addresses, to the dynamic routing protocols. The protocols store floating addresses so that the protocols can prefer a floating address as the advertised next hop. This applies only to protocols that allow explicit next-hop advertisement.
When you are using BGP4 and IPv6 addressing, you can advertise one or two next-hop addresses for each route. The BIG-IP system selects the addresses to advertise based on several factors.
For BGP-4 only, you can choose from several combinations of configuration parameters to control the selection of next-hop IPv6 addresses.
These parameters include the link-local autoconfigured address (LL-A), the link-local self IP address (LL), the link-local floating self IP address (LL-F), the global self IP address (G), the global floating self IP address (G-F), and the EBGP multihop setting; their combination determines the advertised next-hop addresses.
The dynamic routing protocols view Traffic Management Microkernel (TMM) static routes as kernel routes. (TMM static routes are routes that you configure using tmsh or the BIG-IP Configuration utility.) Because TMM static routes are viewed as kernel routes, a TMM static route has a higher precedence than a dynamic route (with an identical destination).
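The precedence rule can be illustrated with a toy route selection over administrative-distance-style values. The numeric values here are invented for the example and are not BIG-IP's; the point is only that a kernel (TMM static) route outranks a dynamic route to the identical destination.

```python
# Lower value wins; the specific numbers are illustrative, not BIG-IP's.
ROUTE_PRECEDENCE = {"kernel": 0, "ospf": 110, "bgp": 200}

def best_route(candidates):
    """Pick the preferred route among candidates for the same destination,
    given as (source_protocol, next_hop) pairs."""
    return min(candidates, key=lambda route: ROUTE_PRECEDENCE[route[0]])

# A TMM static route (viewed by the protocols as a kernel route) beats a
# dynamically learned OSPF route to the identical destination:
print(best_route([("ospf", "203.0.113.1"), ("kernel", "203.0.113.2")]))
# ('kernel', '203.0.113.2')
```

This is why a manually configured TMM static route continues to direct traffic even when a routing protocol learns an alternative path to the same destination.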
Management routes and addresses are not visible to the dynamic routing protocols and cannot be advertised. Routes to the networks reachable through the management interface can be learned by dynamic routing protocols if they are reachable through a VLAN, VLAN group, or tunnel.
If the BIG-IP system that you are configuring for dynamic routing is part of a redundant system configuration, you should consider these factors:
For the BGP, RIP, RIPng, and IS-IS protocols, you no longer need to specifically configure these protocols to function in active-standby configurations. Each member of the device group automatically advertises the first floating self IP address of the same IP subnet as the next hop for all advertised routes. This applies to both IPv4 and IPv6 addresses.
Advertising a next-hop address that is always serviced by an active device guarantees that all traffic that follows routes advertised by any device in the redundant pair is forwarded based on the active LTM® configuration.
For OSPF protocols, the BIG-IP system ensures that standby device group members are the least preferred next-hop routers. The system does this by automatically changing the runtime state as follows:
| Protocol Name | Runtime state change |
| --- | --- |
| OSPFv2 | The OSPF interface cost is increased on all interfaces to the maximum value (65535) when the status of the device is Standby. Also, all external type 2 Link State Advertisements (LSAs) are aged out. |
| OSPFv3 | The OSPF interface cost is increased on all interfaces to the maximum value. |
If you have a VIPRION® system, it is helpful to understand how the cluster environment affects the dynamic routing functionality.
On a VIPRION® system, the dynamic routing system behaves as if the cluster were a single router. This means that a cluster always appears as a single router to any peer routers, regardless of the dynamic routing protocol being used.
From a management perspective, the VIPRION system is designed to appear as if you are configuring and managing the routing configuration on a single appliance. When you use the cluster IP address to configure the dynamic routing protocols, you transparently configure the primary blade in the cluster. The cluster synchronization process ensures that those configuration changes are automatically propagated to the other blades in the cluster.
The dynamic routing system takes advantage of the redundancy provided by the cluster environment of a VIPRION® chassis, for the purpose of providing redundancy for the dynamic routing control plane. Two key aspects of dynamic routing control plane redundancy are the VIPRION cluster’s appearance to the routing modules as a single router, and the operational modes of the enabled dynamic routing protocols.
Enabled dynamic routing protocols run on every blade in a cluster in one of these operational modes: MASTER, STANDBY, or SLAVE.
This table shows the operational modes for primary and secondary blades, on both the active cluster and the standby cluster.
| Blade Type | Active Cluster | Standby Cluster |
| --- | --- | --- |
| Primary | MASTER mode | STANDBY mode |
| Secondary | SLAVE mode | SLAVE mode |
In MASTER and STANDBY modes, all routes learned by way of dynamic routing protocols on the primary blade are (in real-time) propagated to all secondary blades. The difference between MASTER and STANDBY mode is in the parameters of advertised routes, with the goal to always make the active unit the preferred next hop for all advertised routes.
The transition from SLAVE to MASTER or STANDBY mode takes advantage of standard dynamic routing protocol graceful restart functionality.
Perform this task to display the current operational mode (MASTER, STANDBY, or SLAVE) of a blade.
With the graceful restart function, the dynamic routing protocol control plane moves from one blade to another without disruption to traffic. Graceful restart is enabled for most supported protocols and address families by default.
To operate successfully, the graceful restart function must be supported and enabled on all peer routers with which the VIPRION® system exchanges routing information. If one or more peer routers does not support graceful restart for one or more enabled dynamic routing protocols, a change in the primary blade causes full dynamic routing reconvergence and, most likely, traffic disruption. The traffic disruption is caused primarily by peer routers discarding routes advertised by the VIPRION system.
The BIG-IP system always preserves complete forwarding information (TMM and host route tables) on VIPRION systems during primary blade changes, regardless of support for graceful restart on peer routers.
The BIG-IP system automatically copies the startup configuration to all secondary blades and loads the new configuration when the running configuration is saved on the primary blade.
You can display information about the runtime state of both the primary and secondary blades. However, some information displayed on secondary blades might differ from the information on the primary blade. For troubleshooting, you should use the information displayed on the primary blade only, because only the primary blade both actively participates in dynamic routing communication and controls route tables on all blades.
Dynamic route propagation depends on a BIG-IP® system daemon named tmrouted. The BIG-IP system starts the tmrouted daemon when you enable the first dynamic routing protocol, and restarts the daemon whenever the BIG-IP system restarts.
In the rare case when you need to manage the tmrouted daemon due to a system issue, you can perform a number of different tasks to troubleshoot and solve the problem.
You perform this task to stop an instance of the tmrouted daemon.
You perform this task to restart an instance of the tmrouted daemon. Whenever the BIG-IP system reboots for any reason, the BIG-IP system automatically starts an instance of tmrouted for each instance of an enabled dynamic routing protocol.
For each dynamic routing protocol, the BIG-IP system logs messages to a file that pertains to the route domain in which the protocol is running. An example of the path name of a dynamic routing log file is /var/log/zebos/rd1/zebos.log, where rd1 indicates that the protocol instance is running in route domain 1.
The system logs additional messages to the files /var/log/daemon.log and /var/log/ltm. The system logs protocol daemon information for protocol-specific issues, and logs nsm and imi daemon information for core daemon-related issues.
If a core dynamic routing daemon exits, the system logs an error message similar to the following to the /var/log/daemon.log file:
Mar 5 22:43:01 mybigip LOGIN: Re-starting tmrouted
In addition, the BIG-IP system logs error messages similar to the following to the /var/log/ltm file:
mcpd: 01070410:5: Removed subscription with subscriber id bgpd
mcpd: 01070533:3: evWrite finished with no byte sent to connection 0xa56f9d0 (user Unknown) - connection deleted
Perform this task to create a log file for debugging. With a debug log file, you can more effectively troubleshoot any issues with a dynamic routing protocol.