sol10430: Causes of uneven traffic distribution across BIG-IP pool members

Original Publication Date: 08/25/2009
Updated Date: 04/02/2014

The BIG-IP system is designed to distribute client requests to load balancing pools composed of multiple servers. Factors such as the BIG-IP configuration, server performance, and network-related issues determine the pool member to which the BIG-IP system sends each connection, and whether connections are evenly distributed across pool members. For example, a virtual server referencing a Round Robin pool distributes connections across pool members evenly over time. However, if the same virtual server also references a BIG-IP configuration object that affects traffic distribution, such as a OneConnect profile or an iRule, connections may not be distributed as evenly as expected.

Note: You can view pool member statistics in the Configuration utility by navigating to Overview > Statistics and selecting Pools from the Statistics Type menu.

Factors affecting traffic distribution across pool members are discussed below.

Load balancing methods

The load balancing algorithm is the primary mechanism that determines how connections are distributed across pool members. You can define static or dynamic load balancing methods for a pool. Certain methods are designed to distribute requests evenly across pool members, while others favor higher performing servers, which can result in uneven traffic distribution across pool members.

Static load balancing methods

Certain static load balancing methods are designed to distribute traffic evenly across pool members. For example, the Round Robin load balancing method causes the BIG-IP system to send each incoming request to the next available member of the pool, thereby distributing requests evenly across the servers in the pool. However, when a static load balancing method such as Round Robin is used along with a BIG-IP configuration object that affects load distribution, such as a OneConnect profile or a persistence profile, traffic may not be evenly distributed across BIG-IP pool members as expected.
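
For example, the following tmsh command is a minimal sketch of creating a pool that uses the Round Robin method; the pool name http_pool and the member addresses are placeholders:

tmsh create ltm pool http_pool members add { 10.0.0.1:80 10.0.0.2:80 10.0.0.3:80 } load-balancing-mode round-robin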

Dynamic load balancing methods

Dynamic load balancing methods typically favor higher performing servers, and may result in uneven traffic distribution across pool members. Dynamic load balancing methods are designed to work with servers that differ in processing speed and memory. For example, when a dynamic load balancing method such as the Observed method is defined for a pool, higher performing servers process more connections over time than lower performing servers. As a result, connection statistics for the higher performing servers will exceed those for lower performing servers.

Note: For more information about changes in dynamic load balancing methods, refer to SOL6406: Overview of Least Connections, Fastest, Observed, and Predictive load balancing modes.
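
For example, the following tmsh commands are a sketch of switching a pool to the Observed (member) method and displaying per-member statistics so that you can compare connection counts over time; http_pool is a placeholder pool name:

tmsh modify ltm pool http_pool load-balancing-mode observed-member
tmsh show ltm pool http_pool members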

Persistence

A persistence profile allows a returning client to connect directly to the server to which it last connected. In some cases, assigning a persistence profile to a virtual server can create the appearance that the BIG-IP system is incorrectly distributing more requests to a particular server. This behavior is expected: when you enable a persistence profile for a virtual server, a returning client bypasses the load balancing method and connects directly to the pool member to which it previously connected. As a result, the traffic load across pool members may be uneven, especially if the persistence profile is configured with a high timeout value.

Note: For more information about how persistence bypasses load balancing, refer to SOL8968: Enabling persistence for a virtual server allows returning clients to bypass load balancing.
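
To confirm whether returning clients are bypassing load balancing because of persistence, you can review the current persistence records. For example, on systems that include the tmsh utility:

tmsh show ltm persistence persist-records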

Source Address Affinity and NATs

Source Address Affinity is a commonly used persistence profile that works by directing requests to the same pool member based on the source IP address of a datagram. If you configure a Source Address Affinity profile for a virtual server, and some of the virtual server connections originate from organizations that are connected to the Internet through a NAT device, those connections may be persisted to the same pool member, creating the appearance that the BIG-IP system is incorrectly distributing more requests to a particular server. You can mitigate this behavior by configuring an alternate persistence profile for the virtual server, such as cookie persistence.

Using Source Address Affinity as a fallback persistence method may also create the appearance that the BIG-IP system incorrectly distributes more requests to a particular server. The BIG-IP system uses the fallback persistence method when it cannot use the specified primary persistence profile. For example, if cookie persistence is configured with source address affinity as a fallback, but a client does not send a persistence cookie, the fallback persistence method is used instead.

Note: If you have configured a Source Address Affinity profile with the Map Proxies option for a virtual server, all AOL clients persist to the same pool member. For more information, refer to SOL7004: Using Source Address Affinity for AOL persistence connections.
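
For example, the following tmsh commands are a sketch of configuring cookie persistence with Source Address Affinity as the fallback method; the virtual server name my_vs is a placeholder, and cookie and source_addr are the default persistence profiles:

tmsh modify ltm virtual my_vs persist replace-all-with { cookie }
tmsh modify ltm virtual my_vs fallback-persistence source_addr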

Persistence through iRules

You can also configure persistence by writing an iRule. Configuring session persistence through an iRule allows you to enable a persistence type for particular requests. iRule persistence can also create the appearance that the BIG-IP system is incorrectly distributing more requests to a particular server.
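
For example, the following iRule is a sketch of universal (UIE) persistence keyed on a custom HTTP header; the header name X-Session-Id and the 1800-second timeout are illustrative, and it assumes a Universal persistence profile is assigned to the virtual server:

when HTTP_REQUEST {
    if { [HTTP::header exists "X-Session-Id"] } {
        persist uie [HTTP::header "X-Session-Id"] 1800
    }
}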

OneConnect

The OneConnect feature minimizes the number of server-side TCP connections by distributing connections on a per-request basis and making existing server-side connections available for reuse by other clients. When a OneConnect profile is enabled for a virtual server, the BIG-IP system can reuse open, idle server connections; when more such connections are available, the system creates fewer new connections. When the network contains a mixture of slow and fast servers, the BIG-IP system may not load balance traffic evenly.

Note: For more information, refer to SOL7208: Overview of the OneConnect profile and SOL2055: The BIG-IP system may not load balance traffic evenly when OneConnect is in use.
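
For example, the following tmsh commands are a sketch of creating a OneConnect profile with a /32 source mask and attaching it to a virtual server; the profile name my_oneconnect and the virtual server name my_vs are placeholders:

tmsh create ltm profile one-connect my_oneconnect source-mask 255.255.255.255
tmsh modify ltm virtual my_vs profiles add { my_oneconnect }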

Monitors

Health monitors verify connections for pool members and nodes on an ongoing basis, and mark pool members and nodes down when they fail to respond. When a device is marked down, the BIG-IP system redirects traffic to another pool member or node, which can result in uneven load balancing statistics across pool members. If you notice that traffic is not distributed across pool members as expected, verify whether one or more pool members or nodes are being marked down by a BIG-IP health monitor. To do so, review the /var/log/ltm file and look for messages that appear similar to the following example:

01070638:3: Pool member 172.24.10.102:80 monitor status down.
01070638:3: Pool member 172.24.10.102:443 monitor status down.
01070638:3: Pool member 172.24.10.103:80 monitor status down.
01070638:3: Pool member 172.24.10.102:443 monitor status up.
01070638:3: Pool member 172.24.10.102:80 monitor status down.
01070638:3: Pool member 172.24.10.103:8080 monitor status down.
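
You can also confirm the current monitor status of the pool members from the command line. For example, assuming a pool named http_pool:

tmsh show ltm pool http_pool members
grep "monitor status" /var/log/ltm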

iRules

iRule configurations often control how the BIG-IP system distributes connections, and some iRules can result in uneven load balancing across pool members. For example, the following iRule uses the node command, which causes the specified server node to be used directly, thus bypassing the load balancing method defined in the pool:

when HTTP_REQUEST {
    if { [HTTP::uri] ends_with ".jpg" } {
        node 10.10.10.1 80
    }
}

UDP traffic

When multiple UDP datagrams are sent from the same IP address and port to a UDP virtual server, the BIG-IP system, by default, sends all datagrams that arrive within the Idle Timeout period configured in the associated UDP profile to the same pool member to which the preceding datagram from the same source was sent. This can result in uneven load balancing statistics across pool members.

Note: For more information, refer to SOL7535: Overview of the UDP profile. For information about modifying the default behavior, refer to SOL3605: Configuring the BIG-IP system to load balance UDP packets individually.
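
For example, the following tmsh command is a sketch of creating a UDP profile that load balances each datagram individually; the profile name udp_per_datagram is a placeholder, and the new profile would then be assigned to the UDP virtual server in place of the default udp profile:

tmsh create ltm profile udp udp_per_datagram defaults-from udp datagram-load-balancing enabled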

Clustered Multiprocessing

Clustered Multiprocessing (CMP) can be a significant factor in how the BIG-IP system distributes connections across active Traffic Management Microkernel (TMM) instances, which in turn can affect distribution to pool members.

Note: For more information about the effects of CMP on traffic distribution, refer to SOL7751: Overview of Clustered Multi-Processing (9.x - 10.x).
