Manual Chapter: Configuring nPath Routing
Introducing nPath routing
With the nPath routing configuration, you can route outgoing server traffic around the BIG-IP® system directly to an outbound router. This method of traffic management increases outbound throughput because packets do not need to be transmitted to the BIG-IP system for translation and forwarding to the next hop. Figure 2.1 shows an nPath configuration.
Note: The type of virtual server that processes the incoming traffic must be a transparent, non-translating type of virtual server.
In bypassing the BIG-IP system on the return path, nPath routing departs significantly from a typical load-balancing configuration. In a typical load-balancing configuration, the destination address of the incoming packet is translated from the virtual server address to the address of the server being load balanced to; that server address then becomes the source address of the returning packet. A default route on the server, pointing to the BIG-IP system, ensures that packets returning to the originating client pass back through the BIG-IP system, which translates the source address back to that of the virtual server.
Note: Do not attempt to use nPath routing for Layer 7 traffic. Certain traffic features do not work properly if Layer 7 traffic bypasses the BIG-IP system on the return path. An example of such a feature is HTTP response compression.
The nPath routing configuration differs from the typical BIG-IP load balancing configuration in the following ways:
The default route on the content servers must be set to the router's internal address (10.1.1.1 in Figure 2.1) rather than to the BIG-IP system's floating self-IP address (10.1.1.10). This causes the return packet to bypass the BIG-IP system.
If you plan to use an nPath configuration for TCP traffic, you must create a Fast L4 profile with the following custom settings:
Enable the Loose Close setting. When you enable the Loose Close setting, the TCP protocol flow expires more quickly, once a TCP FIN packet is seen. (A FIN packet indicates the tearing down of a previous connection.)
Set the TCP Close Timeout setting to the same value as the profile idle timeout if you expect half closes. If not, you can set this value to 5 seconds.
Because address translation and port translation are turned off, the incoming packet arrives at the pool member still destined for the virtual server address (176.16.1.1 in Figure 2.1), not for the address of the server. For the server to respond to that address, the address must be configured on the loopback interface of the server and configured for use with the server software.
Ensure that the bigdb configuration key connection.autolasthop is enabled. Alternatively, on each content server, you can add a return route to the client.
For more information about these tasks, click the Help tab in the Configuration utility, or see the Configuration Guide for BIG-IP® Local Traffic Manager.
Note: You perform the tasks contained in this guide using the Configuration utility; however, the procedures do not include the step of logging on to the Configuration utility. Before you begin the tasks, log on to the Configuration utility.
Creating a custom Fast L4 profile

1. On the Main tab of the navigation pane, expand Local Traffic, and click Profiles.
The HTTP Profiles screen opens.
2. From the Protocol menu, choose Fast L4.
The Fast L4 Profiles screen opens.
3. To create a custom profile, click Create.
The New Fast L4 Profile screen opens.
Note: If the Create button is unavailable, your user role does not grant you permission to create a Fast L4 profile.
a) In the Name box, type a name for the profile.
b) Check the Loose Close box.
c) Set the TCP Idle Timeout setting according to the type of traffic that the virtual server will handle. For more information about setting this timeout, see Setting timers for nPath configurations.
4. Click Finished.
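The steps above can also be performed from the command line with tmsh. The following is a sketch only: the profile name npath_fastl4 and the 300-second idle timeout are assumptions for illustration (choose the idle timeout per Setting timers for nPath configurations).

```shell
# Hypothetical profile name; the timeout values are illustrative assumptions.
# loose-close makes the flow expire quickly once a TCP FIN is seen.
tmsh create ltm profile fastl4 npath_fastl4 \
    loose-close enabled \
    tcp-close-timeout 5 \
    idle-timeout 300
```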
Creating a server pool

1. On the Main tab of the navigation pane, expand Local Traffic, and click Pools.
The Pools screen opens.
2. To create a new pool, click Create.
The New Pool screen opens.
Note: If the Create button is unavailable, your user role does not grant you permission to create a pool.
3. Click Finished.
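The equivalent tmsh command is sketched below; the pool name npath_pool and the member addresses are hypothetical (substitute the addresses of your content servers).

```shell
# Hypothetical pool name and member addresses for the content servers.
tmsh create ltm pool npath_pool \
    members add { 10.1.1.11:80 10.1.1.12:80 }
```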
Creating a virtual server

After you create a server pool, you need to create a virtual server that references the custom Fast L4 profile and the pool you created in the previous two tasks.
1. On the Main tab of the navigation pane, expand Local Traffic, and click Virtual Servers.
The Virtual Servers screen opens.
2. To create a new virtual server, click Create.
The New Virtual Server screen opens.
Note: If the Create button is unavailable, your user role does not grant you permission to create a virtual server.
3. For the Type setting, select Performance (Layer 4).
a) For Protocol, select UDP, TCP, or * All Protocols from the list.
b) For Protocol Profile (Client), select the name of the custom Fast L4 profile that you created.
c) Clear the Address Translation check box to disable address translation.
d) Clear the Port Translation check box to disable port translation.
4. Click Finished.
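A tmsh sketch of this virtual server follows, using the virtual server address from Figure 2.1 and the hypothetical names npath_fastl4 and npath_pool for a custom Fast L4 profile and pool; the port is an assumption.

```shell
# Hypothetical object names; 176.16.1.1 is the virtual server address in Figure 2.1.
# Address and port translation are disabled so the packet reaches the pool member
# still destined for the virtual server address.
tmsh create ltm virtual npath_vs \
    destination 176.16.1.1:80 \
    profiles add { npath_fastl4 } \
    pool npath_pool \
    translate-address disabled \
    translate-port disabled
```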
You must place the IP address of the virtual server (176.16.1.1 in Figure 2.1 on page 2-1) on the loopback interface of each server. Most UNIX variants have a loopback interface named lo0. Microsoft® Windows® has an MS Loopback interface in its list of network adaptors. For some versions of Windows, you must install the loopback interface using the installation CD. Consult your server operating system documentation for information about configuring an IP address on the loopback interface. The loopback interface is ideal for the nPath configuration because it does not participate in the ARP protocol.
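On a Linux content server, for example, this might look like the following sketch. The /32 mask keeps the address host-local, and the two sysctl settings are an optional precaution on multi-homed hosts so the server never answers ARP for the virtual server address.

```shell
# Add the virtual server address (from Figure 2.1) to the loopback interface.
ip addr add 176.16.1.1/32 dev lo

# Optional: ensure the host neither answers ARP for, nor advertises,
# addresses that are not configured on the receiving interface.
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2
```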
For inbound traffic, you must define a route through the BIG-IP system self IP address to the virtual server. In the example, this route is 176.16.1.1, with the external self IP address 10.1.1.10 as the gateway.
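On a Linux-based upstream router, that inbound route might be expressed as follows (a sketch; your router's syntax will differ).

```shell
# Route traffic for the virtual server address through the BIG-IP
# external self IP address, per the example in the text.
ip route add 176.16.1.1/32 via 10.1.1.10
```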
To ensure that nPath routing works correctly, you must ensure that the bigdb configuration key connection.autolasthop is enabled. This is relevant for both IPv4 and IPv6 addressing formats.
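You can check and set this key from tmsh, along the following lines.

```shell
# Display the current value of the auto-lasthop database key,
# then enable it if necessary.
tmsh list sys db connection.autolasthop
tmsh modify sys db connection.autolasthop value enable
```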
When you create an nPath configuration, the BIG-IP system sees only client requests. Therefore, the timer for the connection timeout is reset only when clients transmit. In general, this means the timeout for an nPath connection should be at least twice as long as for a comparable connection where the BIG-IP system sees both client requests and node responses. Following are descriptions of scenarios for setting the timers for UDP and TCP traffic.
When you configure nPath for UDP traffic, the BIG-IP system tracks packets sent between the same source and destination address to the same destination port as a connection. This is necessary to ensure that client requests that are part of a session always go to the same server. Therefore, a UDP connection is really a form of persistence, since UDP is a connectionless protocol. To calculate the timeout for UDP, estimate the maximum amount of time that a server transmits UDP packets before a packet is sent by the client. In some cases, the server might transmit hundreds of packets over several minutes before ending the session or waiting for a client response.
When you configure nPath for TCP traffic, the BIG-IP system sees only the client side of the connection. For example, in the TCP three-way handshake, the BIG-IP system sees the SYN from the client to the server and the final ACK from the client to the server, but does not see the SYN-ACK from the server to the client. The timeout for the connection should match the combined TCP retransmission timeout (RTO) of the client and the node as closely as possible to ensure that all connections are successful. The maximum initial RTO observed on most UNIX and Windows systems is approximately 25 seconds, so a timeout of 51 seconds should adequately cover the worst case. Once a TCP session is established, an adaptive timeout is used; in most cases, this results in a faster timeout on the client and node. Only if your clients are on slow, lossy networks should you ever need a higher TCP timeout for established connections.
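The timeout arithmetic above can be made concrete with a small sketch; the helper names are assumptions, the 25-second figure is the maximum initial RTO cited in the text, and the UDP rule follows the "at least twice as long" guidance for connections where only the client side is visible.

```python
def npath_tcp_handshake_timeout(client_rto: float = 25.0,
                                node_rto: float = 25.0) -> float:
    """Cover retransmission by both ends, plus a second of margin, since the
    BIG-IP system sees only the client side of the handshake. Defaults use
    the ~25 s maximum initial RTO cited for most UNIX and Windows systems."""
    return client_rto + node_rto + 1.0


def npath_udp_timeout(max_server_only_interval: float) -> float:
    """A UDP 'connection' is really a persistence record whose timer resets
    only on client packets, so allow at least twice the longest interval in
    which only the server transmits."""
    return 2.0 * max_server_only_interval


print(npath_tcp_handshake_timeout())   # 51.0 -- the worst case cited above
print(npath_udp_timeout(120.0))        # 240.0 for two-minute server-only bursts
```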