Manual Chapter: BIG-IP Administrator Guide v3.2: Using Advanced Network Configurations

Applies To: BIG-IP 3.2 PTF-01, 3.2.3 PTF-01, 3.2.3, 3.2.0

Chapter 9

Using Advanced Network Configurations



Introducing advanced network configurations

In addition to the basic setup features available on the BIG-IP Controller, you can use a number of special network configurations to optimize your network. This chapter describes those configurations. They are optional, and may not be required in your implementation of the BIG-IP Controller. The following topics are described in this chapter:

  • nPath routing
  • Per-connection routing
  • ISP load balancing
  • VPN load balancing
  • VPN and router load balancing
  • One IP network topology with one interface
  • One IP network topology with two interfaces
  • 802.1q VLAN trunk mode

nPath routing

nPath routing allows you to route outgoing server traffic around the BIG-IP Controller directly to an outbound router. This method of traffic management increases outbound throughput because packets do not need to be transmitted to the BIG-IP Controller for translation and forwarding to the next hop.

To use nPath routing, you must configure the BIG-IP Controller so that it does not translate the IP address or port of incoming packets. This is important because packets must not carry translated addresses when they travel outbound to the router. To avoid translation of incoming (destination) packets, you must define virtual servers with address translation turned off.

The following tasks are required to configure the BIG-IP Controller to use nPath routing:

  • Define a virtual server
  • Turn address translation off for the virtual server
  • Set a route on your routers to the virtual server with the BIG-IP Controller as the gateway
  • Set the idle connection time-out value to remove stale connections
  • Configure the servers

Defining a virtual server with address translation disabled

You can disable address translation on any virtual server. Turning off address translation is necessary for nPath routing. The following two procedures describe how to create a virtual server in the Configuration utility and then how to turn address translation off for the virtual server.

To define a standard virtual server mapping in the Configuration utility

  1. In the navigation pane, click Virtual Servers.
  2. On the toolbar, click Add Virtual Server.
    The Add Virtual Server screen opens.
  3. In the Address box, enter the virtual server's IP address or host name.
  4. In the Netmask box, type an optional netmask. If you leave this setting blank, the BIG-IP Controller uses a default netmask based on the IP address you entered for the virtual server. Use the default netmask unless your configuration requires a different netmask.
  5. In the Broadcast box, type the broadcast address for this virtual server. If you leave this box blank, the BIG-IP Controller generates a default broadcast address based on the IP address and netmask of this virtual server.
  6. In the Port box, either type a port number, or select a service name from the drop-down list.
  7. For Interface, select the external (destination processing) interface on which you want to create the virtual server. Select default to allow the Configuration utility to select the interface based on the network address of the virtual server.
  8. In Resources, click the Node List button.
  9. In the Node Address box, type the IP address or host name of the first node to which the virtual server maps. If you have already defined a node address, you can choose it from the list.
  10. In the Node Port box, type the node port number, or select the service name from the drop-down list. If you have already defined a node port, you can choose it from the list.
  11. Click the add button (>>) to add the node to the Current Members list for the virtual server.
  12. To add additional nodes to the virtual server mapping, type a Node Address and a Node Port, and click the add button (>>).
  13. To remove nodes from the virtual server mapping, click the node listed in the Current Members list and click the remove button (<<).
  14. After you have added or removed nodes from the Current Members list, click the Add button to save the virtual server.

To configure address translation for virtual servers in the Configuration utility

After you create a virtual server, you must turn address translation off for the virtual server.

  1. In the navigation pane, click Virtual Servers.
    The Virtual Servers screen opens.
  2. In the virtual server list, click the virtual server for which you want to turn off address translation.
    The properties screen for the virtual server you clicked opens.
  3. In the Enable Translation options, clear the Address check box. This turns address translation off for the virtual server.
  4. Click the Apply button.

To define a virtual server mapping on the command line

Enter the bigpipe vip command as shown below to create the virtual server mapping. Note that you must turn off address translation for the virtual server you create.

  bigpipe vip <virtual IP>:<port> define <node IP>:<port> \
<node IP>:<port>... <node IP>:<port>

For example, the following command defines a virtual server that maps to three nodes; the virtual server and node addresses below are illustrative.
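
  bigpipe vip 11.1.1.1:80 define 11.1.1.11:80 11.1.1.12:80 11.1.1.13:80

After you create the virtual server, you must turn off address translation. Use the following syntax to turn off address translation for the virtual server: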

  bigpipe vip <vip>:<port> translate addr [ enable | disable ]

For example, use the following command to turn off address translation for the virtual server 11.1.1.1:80.

  bigpipe vip 11.1.1.1:80 translate addr disable

Setting the route through the BIG-IP Controller

A route must be defined on the inbound router in your network configuration that directs traffic through the BIG-IP Controller. The destination of this route should be the IP address (or alias) of the server, or servers, for which you want to set up nPath routing. The gateway should be the external shared IP alias of the BIG-IP Controller.

For information about how to define this route, please refer to the documentation provided with your router.
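
As an illustration only, on a router that accepts Cisco IOS-style commands, a static host route for the virtual server 11.1.1.1 through a hypothetical BIG-IP external shared alias of 11.1.1.100 might look like the following. Both addresses are assumptions for this sketch; the exact command depends on your router.

  ip route 11.1.1.1 255.255.255.255 11.1.1.100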

Setting the idle connection time-out

With nPath routing, the BIG-IP Controller cannot track the normal FIN/ACK sequences made by connections. Normally, the BIG-IP Controller shuts down closed connections based on this sequence. With nPath routing, the idle connection time-out must be configured to clean up closed connections. You need to set an appropriate idle connection time-out value so that valid connections are not disconnected, and closed connections are cleaned up in a reasonable time.

To set the idle connection time-out in the Configuration utility

  1. In the navigation pane, click Virtual Servers.
  2. In the Virtual Servers list, click the wildcard virtual server you created for nPath routing.
    The Virtual Server Properties screen opens.
  3. In the Port box, click the port.
    The Global Virtual Port Properties screen opens.
  4. In the Idle connection timeout TCP (seconds) box, type a time-out value for TCP connections. The recommended time-out setting is 10 seconds.
  5. In the Idle connection timeout UDP (seconds) box, type a time-out value for UDP connections. The recommended time-out setting is 10 seconds.
  6. Click Apply.

To set the idle connection time-out in the /etc/bigip.conf file

To set the idle connection time-out in the /etc/bigip.conf file, edit the following lines:

  treaper <port> <seconds>
  udp <port> <seconds>

The <seconds> value is the number of seconds a connection is allowed to remain idle before it is terminated. The <port> value is the port on the wildcard virtual server for which you are configuring nPath routing. The recommended value for both the TCP and UDP connection time-outs is 10 seconds.
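
For example, to reap idle TCP and UDP connections on port 80 after the recommended 10 seconds, the lines would read:

  treaper 80 10
  udp 80 10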

Configuring the servers

You must configure your servers differently to work in nPath mode. The IP address of the server (11.1.1.1 in Figure 9.1) must be placed on what is known as the loopback interface. A loopback interface is a software interface that is not associated with an actual network card. It allows a server to respond to an IP address without advertising it on a network. Most UNIX variants have a loopback interface named lo0. Microsoft Windows has an MS Loopback interface in its list of network adapters. Consult your server operating system documentation for information about configuring an IP address on the loopback interface. The ideal loopback interface for the nPath configuration does not participate in the ARP protocol, because that would cause packets to be routed incorrectly.
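
As a sketch only, on a BSD-style UNIX server you might place the address on the loopback interface with a command like the following; the host netmask keeps the address from being advertised as a network route. Consult your operating system documentation for the exact procedure.

  ifconfig lo0 alias 11.1.1.1 netmask 255.255.255.255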

Figure 9.1 An example nPath configuration with more than one virtual server

Per-connection routing

In situations where the BIG-IP Controller accepts connections for virtual servers from more than one router, you can send the return data back through the same device from which the connection originated. You can use this option to spread the load among outbound routers, or to ensure that connections go through the same device if that device is connection-oriented, such as a proxy, cache, firewall, or VPN router.

To set up last hop pools, define a list of routers as a pool from which the BIG-IP Controller receives packets. For information about creating a pool, see Defining pools, on page 3-4. The BIG-IP Controller determines the MAC address of the routers when you define the pool. You then associate the pool with the virtual server by using the lasthop keyword to specify the last hop pool for the virtual server. When a packet arrives for the virtual server, the MAC address the packet came from is matched against the MAC addresses of the members of the last hop pool. The IP address of the matching member is stored with the connection as the last hop address. Connections coming from the nodes and heading out toward the client are then routed to the router with that last hop address, instead of to the default route.

Note: The packets must come from a member of the last hop pool or they are rejected.

To configure last hop pools for virtual servers from the command line

Use the following syntax to configure last hop pools for virtual servers:

  bigpipe vip <vip>:<port> lasthop pool <pool_name>

For example, you might use the following command:

  bigpipe vip 192.168.1.10:80 lasthop pool secure_routers
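
The secure_routers pool referenced here would already have been created as described in Defining pools, on page 3-4. A sketch of such a definition, using hypothetical router addresses, might be:

  bigpipe pool secure_routers { lb_mode rr member 192.168.1.1:0 member 192.168.1.2:0 }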

To configure last hop pools for virtual servers in the Configuration utility

Before you follow this procedure, you must configure at least one pool (for your routers or firewalls) and one virtual server that references the pool.

  1. In the navigation pane, click Virtual Servers.
    The Virtual Servers screen opens.
  2. In the virtual server list, click the virtual server for which you want to set up a last hop pool.
    The properties screen for the virtual server you clicked opens.
  3. In the Last Hop Pool list, select the pool you created that contains your routers.
  4. Click the Apply button.

ISP load balancing

You may find that as your network grows, or as network traffic increases, you need to add an additional connection to the internet. You can use the configuration described here to add that connection to your existing network. Figure 9.2 shows an additional internet connection:

Figure 9.2 An example of an additional internet connection

Configuring interfaces for the additional internet connection

An additional internet connection requires special interface configuration. You must set interfaces on the redundant BIG-IP Controller system (1a and 1b in Figure 9.2) to process source and destination addresses. Note that in a basic controller configuration, one interface is configured as an internal interface (source processing), and the other interface is configured as an external interface (destination processing).

In order to load balance outbound connections, you must turn destination processing on for the internal interface, and source processing on for the external interface. Use the following command to turn destination processing on for the internal interface; in this example, the interface name is exp1:

  bigpipe interface exp1 dest enable

Use the following command to turn source processing on for the external interface; in this example, the interface name is exp0:

  bigpipe interface exp0 source enable

Configuring virtual servers for an additional internet connection

An additional internet connection requires you to create a pool for the inside interfaces of the routers. After you create the pool, you can create the virtual servers that reference these pools.

Defining the pools for the additional internet connection

First, define the pool router_insides for the internal addresses of the routers. Use the following command to create the pool router_insides:

  bigpipe pool router_insides { lb_mode rr member <router1>:0 member <router2>:0 }

Replace <router1> and <router2> with the internal IP addresses of the respective routers. Also note that this example uses the global round robin load balancing method.
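
For example, with hypothetical router inside addresses of 205.100.19.1 and 205.100.19.2, the command would read:

  bigpipe pool router_insides { lb_mode rr member 205.100.19.1:0 member 205.100.19.2:0 }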

Finally, define the pool server_pool for the nodes that handle the requests to virtual server 205.100.19.22:80:

  bigpipe pool server_pool { lb_mode rr member <server1>:80 member <server2>:80 member <server3>:80 }

Replace <server1>, <server2>, and <server3> with the internal IP addresses of the respective servers. Also note that this example uses the global round robin load balancing method.

Defining the virtual servers for the additional internet connection

After you define the pools for the inside IP addresses of the routers, you can define the virtual servers for the redundant BIG-IP Controllers 1a and 1b.

  • Configure the redundant controllers to load balance inbound connections
  • Configure the redundant controllers to load balance outbound connections

Inbound configuration

First, configure the controllers to handle inbound traffic.

Create the virtual server for controllers 1a and 1b with the following command:

  bigpipe vip 205.100.19.22:80 use pool server_pool

Configure the virtual server to use the last hop pool with the routers' inside addresses:

  bigpipe vip 205.100.19.22:http lasthop pool router_insides

Outbound configuration

Next, configure controllers 1a and 1b to handle outbound traffic. Create a virtual server that sends traffic to the pool you created for the internal interfaces of the routers (router_insides). Use the following command to create the virtual server:

  bigpipe vip 0.0.0.0:0 exp1 use pool router_insides

VPN load balancing

You can use the BIG-IP Controller to load balance virtual private network (VPN) routers used to connect two private networks. Since neither translation nor load balancing is required, you can combine a forwarding virtual server with a lasthop pool.

Figure 9.3 An example of a VPN load balancing configuration

Configuring interfaces for VPN load balancing

A VPN load balancing configuration requires special interface configuration. You must configure the interfaces on the redundant BIG-IP Controller system (1a and 1b, and 2a and 2b, in Figure 9.3) to process source and destination addresses. Note that in a basic controller configuration, one interface is configured as an internal interface (source processing), and the other interface is configured as an external interface (destination processing).

In order for VPN load balancing to work, you must turn destination processing on for the internal interface, and source processing on for the external interface. Use the following command to turn destination processing on for the internal interface; in this example, the interface name is exp1:

  bigpipe interface exp1 dest enable

Use the following command to turn source processing on for the external interface; in this example, the interface name is exp0:

  bigpipe interface exp0 source enable

Configuring virtual servers for VPN load balancing

In the following examples, only the configuration for the BIG-IP Controllers on network 192.168.11 is shown (controllers 2a and 2b). The configuration for 192.168.13 is the same, only with different network numbers. Since VPNs are connection-oriented, you must set up a last hop pool for sending the return traffic back through the VPN that originated the traffic. After you create the pools, you can create the virtual servers that reference these pools.

Defining the pools for VPN load balancing

First, define the pool vpn_insides for the internal addresses of the VPN routers. Use the following command to create the pool vpn_insides:

  bigpipe pool vpn_insides { lb_mode rr member <vpn1>:22 member <vpn2>:22 member <vpn3>:22 }

Replace <vpn1>, <vpn2>, and <vpn3> with the internal IP addresses of the respective VPN routers. In this example, the routers are service checked on port 22. Also note that this example uses the global round robin load balancing method.

Finally, define the pool server_pool for the nodes that handle the requests to virtual server 205.100.19.22:80:

  bigpipe pool server_pool { lb_mode rr member <server1>:80 member <server2>:80 member <server3>:80 }

Replace <server1>, <server2>, and <server3> with the internal IP addresses of the respective servers. Also note that this example uses the global round robin load balancing method.

Defining the virtual servers for VPN load balancing

After you define the pools for the inside IP addresses of the routers, you can define the virtual servers for the redundant BIG-IP Controllers 2a and 2b.

  • Configure the redundant controllers to load balance inbound connections
  • Configure the redundant controllers to load balance outbound connections

Inbound configuration

First, configure the controllers to handle inbound traffic from the remote network.

Create the virtual servers for controllers 2a and 2b with the following commands:

  bigpipe vip 192.168.13.1:0 exp1 forward
  bigpipe vip 192.168.13.2:0 exp1 forward
  bigpipe vip 192.168.13.3:0 exp1 forward

Configure the virtual servers to use the last hop pool with the inside VPN router addresses:

  bigpipe vip 192.168.13.1:0 lasthop pool vpn_insides
  bigpipe vip 192.168.13.2:0 lasthop pool vpn_insides
  bigpipe vip 192.168.13.3:0 lasthop pool vpn_insides

Outbound configuration

Next, configure controllers 2a and 2b to handle outbound traffic. Create a virtual server that sends traffic to the pool you created for the internal interfaces of the VPN routers (vpn_insides). Use the following commands to create virtual servers for connecting to the machines on the remote network:

  bigpipe vip 192.168.11.1:0 exp1 use pool vpn_insides
  bigpipe vip 192.168.11.1:0 translate addr disable
  bigpipe vip 192.168.11.1:0 translate port disable
  bigpipe vip 192.168.11.2:0 exp1 use pool vpn_insides
  bigpipe vip 192.168.11.2:0 translate addr disable
  bigpipe vip 192.168.11.2:0 translate port disable
  bigpipe vip 192.168.11.3:0 exp1 use pool vpn_insides
  bigpipe vip 192.168.11.3:0 translate addr disable
  bigpipe vip 192.168.11.3:0 translate port disable

The addresses 192.168.11.1, 192.168.11.2, and 192.168.11.3 correspond to the IBM Compatible, Tower box, and Mac Classic on the remote network in Figure 9.3. Note that port translation has been turned off because the members of the vpn_insides pool were defined with port 22 for service checking. If port translation were not disabled, all outbound connections would be translated to port 22.

VPN and router load balancing

You can use the transparent device load balancing feature in the BIG-IP Controller to connect two private networks as well as load balance internet connections through multiple routers. Figure 9.4 is an example of this network configuration.

Figure 9.4 An example of a VPN and multiple router load balancing configuration

Configuring interfaces for VPN and router load balancing

A VPN and router load balancing configuration requires special interface configuration. You must configure the interfaces on the redundant BIG-IP Controller system (1a and 1b, and 2a and 2b, in Figure 9.4) to process source and destination addresses. Note that in a basic controller configuration, one interface is configured as an internal interface (source processing), and the other interface is configured as an external interface (destination processing).

In order for VPN load balancing to work, you must turn destination processing on for the internal interface, and source processing on for the external interface. Use the following command to turn destination processing on for the internal interface; in this example, the interface name is exp1:

  bigpipe interface exp1 dest enable

Use the following command to turn source processing on for the external interface; in this example, the interface name is exp0:

  bigpipe interface exp0 source enable

Configuring virtual servers for VPN and router load balancing

In the following examples, only the configuration for the BIG-IP Controllers on network 192.168.11 is shown (controllers 2a and 2b). The configuration for 192.168.13 is the same, only with different network numbers. Since VPNs are connection-oriented, VPN and router load balancing requires you to create a pool for the inside interfaces on the VPNs and routers. After you create the pools, you can create the virtual servers that reference these pools.

Defining the pools for VPN load balancing

First, define the pool vpn_insides for the internal addresses of the VPN routers. Use the following command to create the pool vpn_insides:

  bigpipe pool vpn_insides { lb_mode rr member <vpn1>:0 member <vpn2>:0 member <vpn3>:0 }

Replace <vpn1>, <vpn2>, and <vpn3> with the internal IP addresses of the respective VPN routers. Also note that this example uses the global round robin load balancing method.

Defining pools for the additional routers

Next, define the pool router_insides for the internal addresses of the routers. Use the following command to create the pool router_insides:

  bigpipe pool router_insides { lb_mode rr member <router1>:0 member <router2>:0 }

Replace <router1> and <router2> with the internal IP addresses of the respective routers. Also note that this example uses the global round robin load balancing method.

Defining a pool for the servers

Next, define the pool server_pool for the nodes that handle the requests to virtual server 205.100.19.22:80:

  bigpipe pool server_pool { lb_mode rr member <server1>:80 member <server2>:80 member <server3>:80 }

Replace <server1>, <server2>, and <server3> with the IP addresses of the respective servers. Also note that this example uses the global round robin load balancing method.

Defining a pool for all inbound traffic sources

Finally, define the pool inbound_sources for all machines that can originate traffic for the virtual server 205.100.19.22:80:

  bigpipe pool inbound_sources { lb_mode rr member <vpn1>:80 \
member <vpn2>:80 member <vpn3>:80 member <router1>:80 \
member <router2>:80 member <server1>:80 member <server2>:80 \
member <server3>:80 }

Replace <vpn1>, <vpn2>, and <vpn3> with the internal IP addresses of the respective VPN routers. Replace <router1> and <router2> with the internal IP addresses of the respective routers. Replace <server1>, <server2>, and <server3> with the IP addresses of the respective servers. Also note that this example uses the global round robin load balancing method.

Defining the virtual servers for VPN and router load balancing

After you define the pools for the inside IP addresses of the routers, you can define the virtual servers for the redundant BIG-IP Controllers 2a and 2b.

  • Configure the redundant controllers to load balance inbound connections
  • Configure the redundant controllers to load balance outbound connections

Inbound configuration for the VPNs

First, configure the controllers to handle inbound traffic from the remote network.

Create the virtual servers for controllers 2a and 2b with the following commands:

  bigpipe vip 192.168.13.1:0 exp1 forward
  bigpipe vip 192.168.13.2:0 exp1 forward
  bigpipe vip 192.168.13.3:0 exp1 forward

Configure the virtual servers to use the last hop pool with the inside VPN router addresses:

  bigpipe vip 192.168.13.1:0 lasthop pool vpn_insides
  bigpipe vip 192.168.13.2:0 lasthop pool vpn_insides
  bigpipe vip 192.168.13.3:0 lasthop pool vpn_insides

Note that by using the last hop pool vpn_insides, only connections that originate from the remote network, through the VPNs, will be allowed to connect to the local 192.168.11 network.

Outbound configuration for the VPNs

Next, configure controllers 2a and 2b to handle outbound traffic. Create a virtual server that sends traffic to the pool you created for the internal interfaces of the VPN routers (vpn_insides). Use the following commands to create virtual servers for connecting to the machines on the remote network:

  bigpipe vip 192.168.11.1:0 exp1 use pool vpn_insides
  bigpipe vip 192.168.11.1:0 translate addr disable
  bigpipe vip 192.168.11.2:0 exp1 use pool vpn_insides
  bigpipe vip 192.168.11.2:0 translate addr disable
  bigpipe vip 192.168.11.3:0 exp1 use pool vpn_insides
  bigpipe vip 192.168.11.3:0 translate addr disable

The addresses 192.168.11.1, 192.168.11.2, and 192.168.11.3 correspond to the IBM Compatible, Tower box, and Mac Classic on the remote network in Figure 9.3, on page 9-12.

Inbound configuration for internet traffic

First, configure the controllers to handle inbound traffic.

Create the virtual server for controllers 1a and 1b with the following command:

  bigpipe vip 205.100.19.22:80 use pool server_pool

Configure the virtual server to use the last hop pool that contains all inbound traffic sources:

  bigpipe vip 205.100.19.22:http lasthop pool inbound_sources

Note that by using the last hop pool inbound_sources, this virtual server will accept connections that originate from either the remote network via the VPNs, or from the internet via the routers.

Outbound configuration for internet traffic

Next, configure controllers 1a and 1b to handle outbound traffic. Create a virtual server that sends traffic to the pool you created for the internal interfaces of the routers (router_insides). Use the following command to create the virtual server:

  bigpipe vip 0.0.0.0:0 exp1 use pool router_insides

SNAT and virtual servers combined

In some cases, you may want to configure outbound transparent device load balancing and SNAT source translations together. In this configuration, the BIG-IP Controller changes the source address of the clients to the external SNAT address, so the clients' actual IP addresses are not exposed to the internet. At the same time, the BIG-IP Controller can load balance those connections across multiple nodes. Therefore, both SNAT translation and virtual server load balancing can operate on the same connection in this configuration.

Figure 9.5 An example of a virtual server/SNAT combination

Configuring interfaces for the SNAT and virtual server combination

The SNAT and virtual server combination does not require additional interface configuration. However, in this configuration, the destination processing interface must be on the internal network and the source processing interface must be on the external network.

Defining a pool for the HTTP cache servers

First, define the pool cache_pool for the nodes that handle the requests to virtual server 0.0.0.0:0:

  bigpipe pool cache_pool { lb_mode rr member <HTTPcache1>:80 member <HTTPcache2>:80 }

Replace <HTTPcache1> and <HTTPcache2> with the IP addresses of the respective HTTP cache servers. Also note that this example uses the global round robin load balancing method.

Outbound configuration

Next, configure controllers 1a and 1b to handle outbound traffic. Create a virtual server that sends traffic to the pool you created for the internal interfaces of the HTTP cache servers (cache_pool). Use the following commands to create a virtual server for connecting to cache servers:

  bigpipe vip 0.0.0.0:0 exp1 use pool cache_pool
  bigpipe vip 0.0.0.0:0 translate port disable

Note that port translation has been turned off because the members of the cache_pool pool were defined with port 80 for service checking. If port translation were not disabled, all outbound connections would be translated to port 80.

After you create the virtual server, type the following SNAT command:

  bigpipe snat map client1 client2 client3 to 205.100.19.23

Replace client1, client2, and client3 with the actual names of the clients in your configuration.

One IP network topology with one interface

The BIG-IP Controller can be used in a single interface configuration when there is only one network in the topology.

Figure 9.6 An example of a single interface topology

Configuring the interface in the single interface topology

A single IP network topology with a single interface requires special interface configuration. You must configure the single interface on the redundant BIG-IP Controller system (1a and 1b, in Figure 9.6) to process source and destination addresses. Note that in a basic controller configuration, one interface is configured as an internal interface (source processing), and the other interface is configured as an external interface (destination processing).

Use the following commands to turn source and destination processing on for the interface; in this example, the interface name is exp0:

  bigpipe interface exp0 source enable
  bigpipe interface exp0 dest enable

Defining a pool for the servers

First, define the pool server_pool for the nodes that handle the requests to virtual server 192.168.13.1:80:

  bigpipe pool server_pool { lb_mode rr member <server1>:80 member <server2>:80 }

Replace <server1> and <server2> with the IP addresses of the respective servers. Also note that this example uses the global round robin load balancing method.

Virtual server configuration

Next, configure controllers 1a and 1b to load balance connections to the servers. Create a virtual server that sends traffic to the pool you created for the servers (server_pool). Use the following command to create a virtual server for connecting to the servers:

  bigpipe vip 192.168.13.1:80 use pool server_pool

Client SNAT configuration

Finally, configure controllers 1a and 1b to handle connections originating from the client. A SNAT must be defined in order to change the source address on the packet to the SNAT external address, which is located on the BIG-IP Controller. If a SNAT were not defined, the server would return packets directly to the client, without giving the BIG-IP Controller the opportunity to translate the source address from the server address back to the virtual server address. The client would not recognize such a packet, because it sent its packets to the IP address of the virtual server, not to the IP address of the real server.

  bigpipe snat map client1 to 192.168.13.99

Replace client1 with the actual name of the client in your configuration.

One IP network topology with two interfaces

The one IP network with two interfaces configuration is similar to the one IP network with one interface configuration, except that it uses two interfaces to optimize throughput.

Figure 9.7 An example of a single IP network with two interfaces topology

Configuring the interfaces in the single IP network with two interfaces topology

A single IP network with two interfaces topology requires special interface configuration. You must configure both interfaces on the redundant BIG-IP Controller system (1a and 1b, in Figure 9.7) to process source and destination addresses. Note that in a basic controller configuration, one interface is configured as an internal interface (source processing), and the other interface is configured as an external interface (destination processing).

In order for this configuration to work, you must turn destination processing on for the internal interface, and source processing on for the external interface. Use the following command to turn destination processing on for the internal interface; in this example, the interface name is exp1:

  bigpipe interface exp1 dest enable

Use the following command to turn source processing on for the external interface; in this example, the interface name is exp0:

  bigpipe interface exp0 source enable

Routing issues

By setting up the IP addresses and interfaces properly, you can configure the BIG-IP Controller to receive all traffic through one interface and to send all traffic out the other interface. The key to optimizing the throughput in this configuration is routing.

In this example, the administrative IP addresses for the BIG-IP Controller are placed on exp1. This is set up when you first configure the BIG-IP Controller, or it can be changed at any time by editing the /etc/netstart file. Once the administrative address is configured, the BIG-IP Controller sets up a route to the IP network going through exp1. The exp0 interface should not be configured with an IP address on the same IP network, because that would create a routing conflict: the BIG-IP Controller would not know which interface the IP network is accessible through. Once the route is set up properly, all traffic sent from the BIG-IP Controller to that IP network goes through exp1.

In order to receive traffic through exp0, the virtual server and SNAT external address in this example are explicitly declared to reside on exp0. This causes the BIG-IP Controller to respond to ARP requests for those addresses from the exp0 interface. Virtual servers and SNAT addresses do not create a routing conflict for the IP network they are declared with. Only administrative or shared IP addresses create routes to the corresponding IP network through the interface on which they are configured. In other words, in this example, the BIG-IP Controller determines that the 192.168.13 network is on interface exp1, and it sends all traffic to those addresses through that interface.

Defining a pool for the servers

First, define the pool server_pool for the nodes that handle the requests to virtual server 192.168.13.1:80:

  bigpipe pool server_pool { lb_mode rr member <server1>:80 member <server2>:80 }

Replace <server1> and <server2> with the IP addresses of the respective servers. Also note that this example uses the global round robin load balancing method.

Virtual server configuration

Next, configure controllers 1a and 1b to load balance connections to the servers. Create a virtual server that sends traffic to the pool you created for the servers (server_pool). Use the following command to create a virtual server for connecting to the servers:

  bigpipe vip 192.168.13.1:80 exp0 use pool server_pool

Client SNAT configuration

Finally, configure controllers 1a and 1b to handle connections originating from the client. A SNAT must be defined in order to change the source address on the packet to the SNAT external address, which is located on the BIG-IP Controller. If a SNAT were not defined, the server would return packets directly to the client, without giving the BIG-IP Controller the opportunity to translate the source address from the server address back to the virtual server address. The client would not recognize such a packet, because it sent its packets to the IP address of the virtual server, not to the IP address of the real server.

  bigpipe snat map client1 to 192.168.13.99 exp0

Replace client1 with the actual name of the client in your configuration.

Setting up 802.1q VLAN trunk mode

The BIG-IP Controller supports VLANs based on IEEE 802.1q trunk mode on BIG-IP Controller internal interfaces. VLAN tags are not supported on the external interfaces. You can define a single VLAN tag for each IP address defined for each BIG-IP Controller internal interface. This includes node network addresses, administrative addresses, shared administrative aliases, and additional aliases.

Note: In order for 802.1q VLAN trunk mode to operate on a BIG-IP Controller interface, all IP addresses on that interface must have a VLAN tag.

In order to use VLAN tags, you must edit /etc/netstart. Additionally, if you plan to use VLAN tags on a redundant BIG-IP system, you must add VLAN tags to the shared IP aliases in BIG/db using the bigpipe ipalias command.

Adding VLAN tag definitions to /etc/netstart

You must specify the VLAN tag ID for the network at the time you define the network address for a particular internal interface. You can do this by extending the additional_xxx definition for the internal interface (where xxx is the interface name, such as exp0, exp1, or hmc0). For example, suppose you have an internal interface IP address defined as follows:

  ipaddr_exp1="10.1.1.1"
  netmask_exp1="255.0.0.0"
  linkarg_exp1="media 100BaseTX,FDX"
  additional_exp1="broadcast 10.255.255.255"

To define a VLAN tag ID of 12 for this network (10.0.0.0), extend the additional_exp1 definition in the following manner:

  additional_exp1="broadcast 10.255.255.255 vlan 12" 

Do this for each internal interface for which you want to define a VLAN tag ID.

Adding VLAN tag definitions to BIG/db

For a redundant configuration, the BIG/db database contains the shared IP addresses for the internal and external interfaces of the BIG-IP Controller. If you plan to use VLAN tags on a redundant BIG-IP system, you must add VLAN tags to these shared IP aliases. Use the following syntax to add VLAN tag definitions to BIG/db:

  bigpipe ipalias <ifname> <if address> netmask <ip mask> \
[ broadcast <ip address> ] [ unit <id> ] [ tag <vlan tag> ]

For example, continuing the previous example, the shared IP alias is defined with the same VLAN tag as its primary address, in this case 12:

  bigpipe ipalias exp1 10.1.1.10 netmask 255.0.0.0 broadcast 10.255.255.255 tag 12

Configuring multiple VLANs on one interface

In order to set up multiple VLANs on the same interface, you need to add a new IP address for the interface. The BIG-IP Controller supports only one VLAN ID per network.

For example, to support an additional network, 12.0.0.0, with a VLAN tag ID of 15 on the same interface, add the following line to your /etc/netstart file after the ifconfig command:

  /sbin/ifconfig exp1 add 12.1.1.1 netmask 255.0.0.0 media 100BaseTX,FDX broadcast 12.255.255.255 vlan 15

Note that in a redundant BIG-IP system, you must also add a shared address to the BIG/db file with the bigpipe ipalias command:

  bigpipe ipalias exp1 12.1.1.1 netmask 255.0.0.0 broadcast 12.255.255.255 tag 15

To enable or disable VLAN tags on the command line

Once you have added VLAN tags, you can use the bigpipe interface command to enable, disable, or show the current settings for the interface. To globally enable or disable the VLAN tags for an internal interface, use the following syntax:

  bigpipe interface <ifname> vlans [ enable | disable | show ]

For example, use the following command to enable VLAN tags on the interface exp1:

  bigpipe interface exp1 vlans enable
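
To display the current VLAN tag settings for the same interface, use the show keyword from the syntax above:

  bigpipe interface exp1 vlans show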

Using ifconfig to add another VLAN

You must use the ifconfig command to define multiple different VLAN-tagged networks on the same interface. For example, use the following syntax to add a new VLAN-tagged network on the same interface:

  ifconfig exp1 add <address> netmask <mask> broadcast <address> vlan <tag>

Note that the BIG-IP Controller allows one VLAN tag per network. In a redundant configuration, you must also add a new shared address on the new network, with the identical VLAN tag ID, to the BIG/db database using the bigpipe ipalias command.

You can also use ifconfig to display VLAN information for the interface exp1 with the following command:

  ifconfig exp1

Using netstat to view VLAN tags

You can also use the netstat utility to display VLAN tag information with the route table for the BIG-IP Controller. Use the following syntax to display VLAN tag information with netstat:

  netstat -nrT

Warning: 802.1q VLAN tags are currently supported only on Intel EtherExpressPro NICs and SysKonnect SX 9843 and SX 9844 NICs.

Disabling and enabling VLAN tags using the Configuration utility

You can use the Configuration utility to enable or disable VLAN tags once they are configured on the BIG-IP Controller.

  1. In the navigation pane, select NICs.
    The Network Interface Cards screen opens.
  2. In the Network Interface Cards list, select the internal NIC for which you want to enable VLAN tags.
    The Network Interface Card Properties screen opens.
  3. In the Network Interface Card Properties screen, check the Enable VLANs check box to enable VLAN tags for the interface, or clear the check box to disable them.
  4. Click the Apply button.

Note: You can only enable or disable VLAN tags in the Configuration utility; you cannot create them there. VLAN tags must be configured by adding VLAN tag values to the /etc/netstart file (and to BIG/db with the bigpipe ipalias command for redundant configurations). The Configuration utility can only enable or disable VLAN tags that have been configured in those files.