Manual Chapter : Overview of TMOS Routing

Applies To:


BIG-IP AAM

  • 13.1.5, 13.1.4, 13.1.3, 13.1.1, 13.1.0

BIG-IP APM

  • 13.1.5, 13.1.4, 13.1.3, 13.1.1, 13.1.0

BIG-IP Link Controller

  • 13.1.5, 13.1.4, 13.1.3, 13.1.1, 13.1.0

BIG-IP Analytics

  • 13.1.5, 13.1.4, 13.1.3, 13.1.1, 13.1.0

BIG-IP LTM

  • 13.1.5, 13.1.4, 13.1.3, 13.1.1, 13.1.0

BIG-IP AFM

  • 13.1.5, 13.1.4, 13.1.3, 13.1.1, 13.1.0

BIG-IP PEM

  • 13.1.5, 13.1.4, 13.1.3, 13.1.1, 13.1.0

BIG-IP DNS

  • 13.1.5, 13.1.4, 13.1.3, 13.1.1, 13.1.0

BIG-IP ASM

  • 13.1.5, 13.1.4, 13.1.3, 13.1.1, 13.1.0

Overview of routing administration in TMOS

As a BIG-IP® system administrator, you typically manage routing on the system by configuring the following BIG-IP system features.

Table 1. BIG-IP system features for route configuration
Interfaces
For the physical interfaces on the BIG-IP system, you can configure properties such as flow control and sFlow polling intervals. You can also configure the Link Layer Discovery Protocol (LLDP), both globally for all interfaces and on a per-interface basis.

Trunks
A trunk is a logical grouping of interfaces on the BIG-IP system. When you create a trunk, this logical group of interfaces functions as a single interface. The BIG-IP system uses a trunk to distribute traffic across multiple links, in a process known as link aggregation.

VLANs
You create VLANs for the external and internal BIG-IP networks, as well as for high-availability communications in a BIG-IP device clustering configuration. The BIG-IP system supports VLANs associated with both tagged and untagged interfaces.

Virtual and self IP addresses
You can create two kinds of IP addresses locally on the BIG-IP system. A virtual IP address is the address associated with a virtual server. A self IP address is an IP address on the BIG-IP system that you associate with a VLAN or VLAN group, to access hosts in that VLAN or VLAN group. Whenever you create virtual IP addresses and self IP addresses on the BIG-IP system, the system automatically adds directly-connected routes that pertain to those addresses.

DHCP support
You can configure the BIG-IP system to function as a DHCP relay or renewal agent. You can also force the renewal of the DHCP lease for the BIG-IP system management port.

Packet filtering
Using packet filters, you can specify whether a BIG-IP system interface should accept or reject certain packets based on criteria such as source or destination IP address. Packet filters enforce an access policy on incoming traffic.

IP address translation
You can configure network address translations (NATs) and source network address translations (SNATs) on the BIG-IP system. Creating a SNAT for a virtual server is a common way to ensure that pool members return responses to the client through the BIG-IP system.

Route domains
You create route domains to segment traffic associated with different applications and to allow devices to have duplicate IP addresses within the same network.

Static routes
For destination IP addresses that are not on the directly-connected network, you can explicitly add static routes. You can add both management (administrative) and TMM static routes to the BIG-IP system.

Dynamic routing
You can configure the advanced routing modules (a set of dynamic routing protocols and core daemons) to ensure that the BIG-IP system can learn about routes from other routers and advertise BIG-IP system routes. These advertised routes can include BIG-IP virtual addresses.

Spanning Tree Protocol (STP)
You can configure any of the Spanning Tree protocols to block redundant paths on a network, thus preventing bridging loops.

The ARP cache
You can manage static and dynamic entries in the ARP cache to resolve IP addresses into MAC addresses.

WCCPv2 support
WCCPv2 is a content-routing protocol developed by Cisco® Systems. It provides a mechanism to redirect traffic flows in real time. The primary purpose of the interaction between WCCPv2-enabled routers and a BIG-IP® system is to establish and maintain the transparent redirection of selected types of traffic flowing through those routers.
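Several of the features above, such as self IP addresses and static routes, are typically configured from the TMOS Shell (tmsh). A minimal sketch, assuming hypothetical object names and addresses and an existing VLAN named internal:

```shell
# Create a self IP address on an existing VLAN (name and address are hypothetical)
tmsh create /net self selfip_internal address 10.10.10.2/24 vlan internal

# Add a TMM static route to a remote network through a next-hop gateway
tmsh create /net route route_app network 172.16.0.0/16 gw 10.10.10.1

# Save the running configuration to the stored configuration
tmsh save /sys config
```
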

About BIG-IP system routing tables

The BIG-IP system contains two sets of routing tables:

  • The Linux routing tables, for routing administrative traffic through the management interface
  • A special TMM routing table, for routing application and administrative traffic through the TMM interfaces

As a BIG-IP administrator, you configure the system so that the BIG-IP system can use these routing tables to route both management and application traffic successfully.

About BIG-IP management routes and TMM routes

The BIG-IP system maintains two kinds of routes:

Management routes
Management routes are routes that the BIG-IP system uses to forward traffic through the special management interface. The BIG-IP system stores management routes in the Linux (that is, kernel) routing table.
TMM routes
TMM routes are routes that the BIG-IP system uses to forward traffic through the Traffic Management Microkernel (TMM) interfaces instead of through the management interface. The BIG-IP system stores TMM routes in both the TMM and kernel routing tables.
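Because the two kinds of routes are stored in different tables, tmsh manages them through different components. A brief illustration, with hypothetical route names, networks, and gateways:

```shell
# Management routes are configured under the sys component
# (stored in the Linux kernel routing table)
tmsh create /sys management-route mgmt_net network 192.168.50.0/24 gateway 192.168.1.254

# TMM routes are configured under the net component
# (stored in both the TMM and kernel routing tables)
tmsh create /net route tmm_net network 172.16.0.0/16 gw 10.10.10.1

# List each kind of route separately
tmsh list /sys management-route
tmsh list /net route
```
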

About traffic load balancing across TMM instances

On eDAG-enabled hardware platforms, you can configure the BIG-IP® system to scatter stateless traffic in round-robin fashion across TMM instances. This feature is particularly beneficial for networks with heavy Domain Name System (DNS) or Session Initiation Protocol (SIP) traffic, as well as for mitigating Distributed Denial of Service (DDoS) attacks.

You can configure this feature in one of two modes:

Global
The system load balances traffic across all TMM instances, spanning multiple blades or high-speed bridges (HSBs) on the system. You configure this mode by setting a global value using the Traffic Management Shell (tmsh). This is the default mode.
Local
Per individual high-speed bridge (HSB). In this case, the system load balances the traffic across TMM instances within a single blade or HSB but not across multiple blades or HSBs. You configure this mode when you create or modify a VLAN on the BIG-IP system.

Load balancing traffic across multiple blades or HSBs

Before performing this task, confirm that the BIG-IP® software is running on an eDAG-enabled hardware platform.

You can configure whether the BIG-IP system load balances traffic across TMM instances on all blades and high-speed bridges (HSBs), or only across TMM instances that are local to a given HSB.

  1. Open the TMOS Shell (tmsh).
    tmsh
  2. Enable the load balancing of traffic across all TMMs on all blades or HSBs on the system.
    modify net dag-globals round-robin-mode global
    After you run this command, the BIG-IP system load balances traffic across all TMM instances on the system, regardless of the associated blade or HSB.
  3. To revert to the local mode, disable the load balancing of traffic across blades or HSBs.
    modify net dag-globals round-robin-mode local
    After you run this command, the BIG-IP system load balances traffic across only those TMM instances that are local to a specific blade or HSB, and only if you have enabled the Round Robin DAG setting on the relevant VLAN.
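After changing the mode, you can confirm the current global value, and you can set the per-VLAN Round Robin DAG setting that the local mode depends on. A sketch, assuming a hypothetical VLAN named internal (verify the exact property name for your software version):

```shell
# Display the current global round-robin mode
tmsh list /net dag-globals

# Enable the Round Robin DAG setting on a specific VLAN (hypothetical VLAN name)
tmsh modify /net vlan internal dag-round-robin enabled
```
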

Viewing routes on the BIG-IP system

You can use the tmsh utility to view different kinds of routes on the BIG-IP system.
  1. Open a console window, or an SSH session using the management port, on the BIG-IP system.
  2. Use your user credentials to log in to the system.
  3. Perform one of these actions at the command prompt:
    • To view all routes on the system, type: tmsh show /net route
    • To view all configured static routes on the system, type: tmsh list /net route
You are now able to view BIG-IP system routes.
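On systems with many routes, you can run these tmsh commands from the advanced (bash) shell and filter the output with standard utilities. For example, assuming a hypothetical destination prefix:

```shell
# Show only the route entries that mention the 172.16. prefix
tmsh show /net route | grep '172.16.'
```
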