Manual Chapter : BIG-IP Reference Guide version 4.2: Configuring the High-Level Network

Applies To:


BIG-IP versions 1.x - 4.x

  • 4.2 PTF-10, 4.2 PTF-09, 4.2 PTF-08, 4.2 PTF-07, 4.2 PTF-06, 4.2 PTF-05, 4.2 PTF-04, 4.2 PTF-03, 4.2 PTF-02, 4.2 PTF-01, 4.2.0


4

Configuring the High-Level Network



Introduction

This chapter describes the elements that make up the high-level network of BIG-IP. The high-level network is distinct from the base network, which is configured with the Setup utility.

Just as the base network is built on the BIG-IP interfaces, the high-level network is built on the load balancing pool. The high-level network includes all of the properties associated with pools, as well as virtual servers and nodes. It can also include pool-selection rules, services, proxies, SNATs, NATs, and health monitor associations for nodes.

  • Pools represent groups of nodes that can receive traffic from BIG-IP according to a specified load balancing method.
  • Rules enable a virtual server to choose among multiple pools based on selection criteria. In the form of cache rules, they also allow the virtual server to cache content intelligently based on frequency of access.
  • Proxies are used for SSL acceleration and content conversion (akamaization) where these features are present.
  • Virtual Servers can be of four types: standard, wildcard, network, or forwarding.
  • Services correspond to the ports (for example, port 80 and port 443) specified for nodes as they are defined in load balancing pools. Service options include enabling/disabling of service, connection limits, and timeouts for UDP and TCP.
  • SNATs and NATs are secure network address translations and network address translations, respectively, and are used primarily to allow servers to establish outgoing connections as clients.
  • Health monitors are status checking devices that may be configured by the user, and are associated with nodes for ongoing monitoring.

The remaining sections of this chapter describe each of these elements and the procedures for configuring them for BIG-IP.

Pools

A load balancing pool is the primary object in the high-level network. A pool is a set of devices grouped together to receive traffic according to a load balancing method. When you create a pool, the members of the pool become visible nodes on the high-level network and can acquire the various properties that attach to nodes. Pools can be accessed through a virtual server, either directly or through a rule, which chooses among two or more pools. The Rules section of this chapter describes several ways to select pools using rules.

You can use the Configuration utility or the bigpipe pool command to create, delete, modify, or display the pool definitions on the BIG-IP.

When creating a pool, you can configure various pool attributes. Table 4.1 lists the attributes you can configure for a pool.

Table 4.1 The attributes of a pool

  • Pool name (Required)
    You can define the name of the pool.
  • Member specification (Required for non-forwarding pools)
    You can define each network device, or node, that is a member of the pool.
  • Load balancing method (Required; default = Round Robin)
    You can define a specific load balancing mode for a pool, and you can configure priority-based member activation. Different pools can be configured with different load balancing modes.
  • Persistence method (Optional)
    You can define a specific persistence method for a pool. Different pools can be configured with different persistence methods.
  • HTTP redirection (Optional)
    You can redirect HTTP requests to a fallback host, protocol, port, or URI path.
  • HTTP header insertion (Optional)
    You can configure a pool to insert a header into an HTTP request. For example, the header could include the original client IP address, to preserve the address during a SNAT connection.
  • Quality of Service (QoS) level (Optional)
    You can configure a pool to set a specific QoS level within a packet, based on the targeted pool.
  • Type of Service (ToS) level (Optional)
    You can configure a pool to set a specific ToS level within a packet, based on the targeted pool.
  • Disabling of SNAT and NAT connections (Optional)
    You can configure a pool so that SNATs and NATs are automatically disabled for any connections using that pool.
  • Forwarding (Optional)
    You can configure a forwarding pool, which causes a connection to be forwarded, using IP routing, instead of load balanced. Creating a forwarding pool allows you to use pool-based features for traffic that should be forwarded.

Working with pools

You can manage pools using either the web-based Configuration utility or the command-line interface. This section describes how to create, delete, modify, or display a pool, using each of these configuration methods.

To create a pool using the Configuration utility

  1. In the navigation pane, click Pools.
    The Pools screen opens.
  2. Click the Add button.
    The Add Pool screen opens.
  3. In the Add Pool screen, fill in the fields to create the new pool and configure its attributes.
  4. Click Done.

To create a pool from the command line

To define a pool and configure its attributes from the command line, use the following syntax:

b pool <pool_name> { member <member_definition> ... member <member_definition> }

For example, if you want to create the pool my_pool with two members, you would type the following command:

b pool my_pool { member 11.12.1.101:80 member 11.12.1.100:80 }

Use the elements shown in Table 4.2 to construct pools from the command line. These elements correspond to the pool attributes listed in Table 4.1.

Table 4.2 Elements for pool construction

  • Pool name
    A string from 1 to 31 characters, for example: new_pool
  • Member definition
    member <ip_address>:<service> [ratio <value>] [priority <value>]
  • lb_method specification
    lb_method [rr | ratio | fastest | least_conn | predictive | observed | ratio_member | fastest_member | least_conn_member | observed_member | predictive_member | dynamic_ratio]
  • persist_mode specification
    persist_mode [simple | cookie | ssl | sip | sticky | msrdp]
  • fallback_host specification
    fallback <fallback_host>
  • fallback_protocol specification
    fallback <fallback_protocol>
  • fallback_port specification
    fallback <fallback_port>
  • fallback_path specification
    fallback <fallback_path>
  • header insert
    header_insert <quoted string>
  • link_qos to client level
    link_qos to client <level>
  • link_qos to server level
    link_qos to server <level>
  • ip_tos to client level
    ip_tos to client <level>
  • ip_tos to server level
    ip_tos to server <level>
  • snat disable
    snat <ip address> disable
  • nat disable
    nat <ip address> disable
  • forward
    forward

To delete a pool from the command line

To delete a pool, use the following syntax:

b pool <pool_name> delete

You must remove all references to a pool before you can delete it.

To modify a pool from the command line

In addition to adding nodes to a pool or deleting nodes from a pool, you can also modify pool attributes. You can add a new member to a pool, change the load-balancing mode, or delete a member from a pool.

For example, to change the default load-balancing mode from Round Robin to Predictive and add two new members to the pool, use a command such as the following:

b pool <pool_name> { \
  lb_method predictive \
  member 11.12.1.101:80 \
  member 11.12.1.100:80 }

To display one or more pools from the command line

Use the following syntax to display all pools:

b pool show

Use the following syntax to display a specific pool:

b pool <pool_name> show

The following sections describe the various pool attributes that you can configure for a pool.

Pool Name

The most basic attribute you can configure for a pool is the pool name. Pool names are case-sensitive and may contain letters, numbers, and underscores (_) only. Reserved keywords are not allowed.

Each pool that you define must have a unique name.

Member specification

For each pool that you create, you must specify the nodes that are to be members of that pool. Nodes must be specified by their IP addresses.

Load balancing method

Load balancing is an integral part of the BIG-IP. Configuring load balancing on the BIG-IP means determining your load balancing scenario, that is, which node should receive a connection hosted by a particular virtual server. Once you have decided on a load balancing scenario, you can specify the appropriate load balancing method for that scenario.

The load balancing method is a pool attribute and consists of two properties: the load-balancing mode and priority-based activation.

Load balancing modes

A load balancing mode is an algorithm or formula that the BIG-IP uses to determine the node to which traffic will be sent. Individual load balancing modes take into account one or more dynamic factors, such as current connection count. Because each application of the BIG-IP is unique, and node performance depends on a number of different factors, we recommend that you experiment with different load balancing modes, and select the one that offers the best performance in your particular environment.

The default load balancing mode on the BIG-IP is Round Robin, which simply passes each new connection request to the next server in line. All other load balancing modes take server capacity and/or status into consideration.

If the equipment that you are load balancing is roughly equal in processing speed and memory, Round Robin mode works well in most configurations. If you want to use the Round Robin mode, you can skip the remainder of this section, and begin configuring other pool attributes that you want to add to the basic pool configuration.

If you are working with servers that differ significantly in processing speed and memory, you may want to switch to Ratio mode or to one of the dynamic modes.

The individual load balancing modes are as follows.

Round Robin

This is the default load balancing mode. Round Robin mode passes each new connection request to the next server in line, eventually distributing connections evenly across the array of machines being load balanced. Round Robin mode works well in most configurations, especially if the equipment that you are load balancing is roughly equal in processing speed and memory.
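The behavior of Round Robin mode can be sketched in a few lines of Python. This is an illustration only, not BIG-IP's internal implementation; the node addresses are hypothetical.

```python
from itertools import cycle

# Hypothetical pool members; Round Robin hands each new
# connection to the next node in line, wrapping around.
nodes = ["11.12.1.100:80", "11.12.1.101:80", "11.12.1.102:80"]
next_node = cycle(nodes)

# Six connection requests are spread evenly: each node gets two.
assignments = [next(next_node) for _ in range(6)]
```

Over any full cycle, every node receives exactly one connection, which is why Round Robin suits pools of roughly equal machines.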

Ratio

BIG-IP distributes connections among machines according to ratio weights that you define, where the number of connections that each machine receives over time is proportionate to a ratio weight you define for each machine. This is a static load balancing mode, basing distribution on static user-assigned ratio weights that are proportional to the capacity of the servers.

Load balancing calculations may be localized to each pool (member-based calculation) or they may apply to all pools of which a server is a member (node-based calculation). Member-based calculation is specified by the extension ratio_member. This distinction is especially important; in Ratio Member mode, the actual ratio weight is a member attribute in the pool definition, whereas in Ratio mode, the ratio weight is an attribute of the node.
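The effect of ratio weights can be sketched as follows. The scheduling order is illustrative, and the members and weights are hypothetical; what matters is the proportion each member receives over a full cycle.

```python
# Hypothetical ratio weights, as they might be assigned to pool
# members under the ratio_member mode (3:1 in favor of .100).
ratios = {"11.12.1.100:80": 3, "11.12.1.101:80": 1}

def ratio_schedule(ratios):
    """Yield members so that, over one full cycle, each member
    appears in proportion to its ratio weight."""
    while True:
        for member, weight in ratios.items():
            for _ in range(weight):
                yield member

gen = ratio_schedule(ratios)
one_cycle = [next(gen) for _ in range(sum(ratios.values()))]
# Over one cycle the 3:1 proportion holds exactly.
```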

Dynamic ratio

Dynamic Ratio mode is like Ratio mode except that ratio weights are based on continuous monitoring of the servers and are therefore continually changing.

This is a dynamic load balancing mode, distributing connections based on various aspects of real-time server performance analysis, such as the current number of connections per node or the fastest node response time.

Dynamic Ratio mode is used specifically for configuring RealNetworks RealServer platforms, Windows platforms equipped with Windows Management Instrumentation (WMI), or any server equipped with an SNMP agent such as the UC Davis SNMP agent or Windows 2000 Server SNMP agent. To install and configure the necessary server software for these systems, refer to Configuring servers and the BIG-IP for Dynamic Ratio load balancing, on page 4-10.

Fastest

Fastest mode passes a new connection based on the fastest response of all currently active nodes. Fastest mode may be particularly useful in environments where nodes are distributed across different logical networks.

Load balancing calculations may be localized to each pool (member-based calculation) or they may apply to all pools of which a server is a member (node-based calculation). The variant of the mode using member-based calculation is distinguished by the extension fastest_member.

Least Connections

Least Connections mode is relatively simple in that the BIG-IP passes a new connection to the node that has the least number of current connections. Least Connections mode works best in environments where the servers or other equipment you are load balancing have similar capabilities.

This is a dynamic load balancing mode, distributing connections based on various aspects of real-time server performance analysis, such as the current number of connections per node or the fastest node response time.

Load balancing calculations may be localized to each pool (member-based calculation) or they may apply to all pools of which a server is a member (node-based calculation). The variant of the mode using member-based calculation is distinguished by the extension least_conn_member.
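The core of Least Connections mode can be sketched in Python. The connection counts are a hypothetical snapshot; BIG-IP tracks these internally.

```python
# Hypothetical snapshot of current connections per node.
connections = {
    "11.12.1.100:80": 12,
    "11.12.1.101:80": 7,
    "11.12.1.102:80": 9,
}

def least_connections(connections):
    """Pick the node with the fewest current connections."""
    return min(connections, key=connections.get)

chosen = least_connections(connections)
connections[chosen] += 1  # the new connection now counts against it
```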

Observed

Observed mode uses a combination of the logic used in the Least Connection and Fastest modes. In Observed mode, nodes are ranked based on a combination of the number of current connections and the response time. Nodes that have a better balance of fewest connections and fastest response time receive a greater proportion of the connections. Observed mode also works well in any environment, but may be particularly useful in environments where node performance varies significantly.

This is a dynamic load balancing mode, distributing connections based on various aspects of real-time server performance analysis, such as the current number of connections per node or the fastest node response time.

Load balancing calculations may be localized to each pool (member-based calculation) or they may apply to all pools of which a server is a member (node-based calculation). The variant of the mode using member-based calculation is distinguished by the extension observed_member.
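The ranking idea behind Observed mode can be sketched as a single score that rewards both fewer connections and faster responses. The scoring formula below is an assumption for illustration; BIG-IP's actual weighting is internal and not published in this guide.

```python
# Hypothetical per-node statistics: current connections and mean
# response time in milliseconds.
stats = {
    "node_a": {"conns": 10, "resp_ms": 40.0},
    "node_b": {"conns": 3,  "resp_ms": 35.0},
    "node_c": {"conns": 5,  "resp_ms": 90.0},
}

def observed_score(s):
    # Lower is better on both axes, so invert each into a single
    # "higher is better" score. Illustrative formula only.
    return 1.0 / (s["conns"] + 1) + 1.0 / s["resp_ms"]

best = max(stats, key=lambda n: observed_score(stats[n]))
```

Here node_b wins: it has both the fewest connections and the fastest response time, so it receives the next connection.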

Predictive

Predictive mode also uses the ranking methods used by Observed mode, where nodes are rated according to a combination of the number of current connections and the response time. However, in Predictive mode, the BIG-IP analyzes the trend of the ranking over time, determining whether a node's performance is currently improving or declining. The nodes with better performance rankings that are currently improving, rather than declining, receive a higher proportion of the connections. Predictive mode works well in any environment.

This is a dynamic load balancing mode, distributing connections based on various aspects of real-time server performance analysis, such as the current number of connections per node or the fastest node response time.

Load balancing calculations may be localized to each pool (member-based calculation) or they may apply to all pools of which a server is a member (node-based calculation). The variant of the mode using member-based calculation is distinguished by the extension predictive_member.
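The trend analysis that distinguishes Predictive mode from Observed mode can be sketched by comparing two successive scores per node. The history values and the trend weight are hypothetical; the point is that an improving node outranks a declining one even when their current scores are close.

```python
# Two successive Observed-style scores per node (previous, current).
history = {
    "node_a": (0.20, 0.25),   # improving
    "node_b": (0.30, 0.24),   # declining
}

def predictive_score(prev, cur, trend_weight=0.5):
    # Current standing plus a bonus (or penalty) for the trend.
    # trend_weight is an illustrative tuning parameter.
    return cur + trend_weight * (cur - prev)

best = max(history, key=lambda n: predictive_score(*history[n]))
```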

Setting the load balancing mode for a pool

A load balancing mode is specified as a pool attribute when a pool is defined and may be changed by changing this pool attribute. For information about configuring a pool, see Working with pools, on page 4-3. The following example describes how to configure a pool to use Ratio Member load balancing. Note that for Ratio Member mode, in addition to changing the load balancing attribute, you must assign a ratio weight to each member node.

Tip: The default ratio weight for a node is 1. If you keep the default ratio weight for each node in a virtual server mapping, the nodes receive an equal proportion of connections as though you were using Round Robin load balancing.

To configure the pool and load balancing mode using the Configuration utility

  1. In the navigation pane, click Pools.
    The Pools screen opens.
    • If you are adding a new pool, click the Add button.
      The Add Pool screen opens.
    • If you are changing an existing pool, click the pool in the Pools list.
      The Pool Properties screen opens.
  2. In the Add Pool screen or Pool Properties screen, configure the pool attributes. For additional information about defining a pool, click the Help button.

Note: Round Robin is the default load balancing mode and never needs to be set unless you are returning to it from a non-default mode.

To switch a pool to ratio_member mode using the Configuration utility

  1. In the Current Members list, click the member you want to edit.
  2. Click the Back button (<<) to pull the member into the resources section.
  3. Change or add the ratio value for the member.
  4. Click the Add button (>>) to add the member back to the Current Members list.
  5. Click Done.

To switch a pool to ratio_member mode from the command line

To switch a pool to ratio_member load balancing, use the modify keyword with the bigpipe pool command. For example, if you want to change the pool my_pool, to use the ratio_member load balancing mode and to assign each member its ratio weight, you can type the following command:

b pool my_pool modify { lb_method ratio_member member 11.12.1.101:80 ratio 1 member 11.12.1.100:80 ratio 3 }

Setting ratio weights for node addresses

The default ratio setting for any node address is 1. If you use the Ratio (as opposed to Ratio Member) load balancing mode, you must set a ratio other than 1 for at least one node address in the configuration. If you do not change at least one ratio setting, the load balancing mode has the same effect as the Round Robin load balancing mode.

To set ratio weights using the Configuration utility

  1. In the navigation pane, click Nodes.
  2. In the Nodes list, click the Node Addresses tab.
    The Node Addresses screen opens.
  3. In the Node Addresses screen, click the Address of the node.
    The Global Node Address screen opens.
  4. In the Ratio box, type the ratio weight of your choice.
  5. Click the Apply button to save your changes.

To set ratio weights from the command line

The bigpipe ratio command sets the ratio weight for one or more node addresses:

b ratio <node_ip> [<node_ip>...] <ratio weight>

For example, the following command sets the ratio weight to 3 for a specific node address:

b ratio 192.168.103.20 3

Note: The <ratio weight> parameter must be a whole number, equal to or greater than 1.

Displaying ratio weights for node addresses

To display the ratio weights for all node addresses

The following command displays the current ratio weight settings for all node addresses.

b ratio show

The command displays the output shown in Figure 4.1.

Figure 4.1 Ratio weights for node addresses

 192.168.200.51    ratio = 3
 192.168.200.52    ratio = 1

To display ratio weight for specific node addresses

Use the following syntax to display the ratio setting for one or more node addresses:

b ratio <node_ip> [<node_ip>...] show

Configuring servers and the BIG-IP for Dynamic Ratio load balancing

You can configure Dynamic Ratio load balancing on RealNetworks RealServer platforms, Windows platforms equipped with Windows Management Instrumentation (WMI), or any server equipped with an SNMP agent such as the UC Davis SNMP agent or Windows 2000 Server SNMP agent.

Configuring RealNetwork RealServers

For RealNetworks, we provide a monitor plugin for the server that gathers the necessary metrics. Configuring a RealServer for Dynamic Ratio load balancing consists of four tasks:

  • Installing the monitor plugin on the RealServer
  • Configuring a real_server health check monitor on the BIG-IP
  • Associating the health check monitor with the server to gather the metrics
  • Creating or modifying the server pool to use Dynamic Ratio load balancing

To install the monitor plugin on the RealServer

  1. Download the monitor plugin F5RealMon.dll from the BIG-IP. The plugin is located in /usr/contrib/f5/isapi. (The URL is https://<bigip_address>/doc/rsplugin/f5realmon.dll.)
  2. Copy f5realmon.dll to the RealServer Plugins directory. (For example, C:\Program Files\RealServer\Plugins.)
  3. If the RealServer process is running, restart it.

To configure a real_server monitor for the server node

Using the Configuration utility or the bigpipe command, create a health-check monitor using the real_server monitor template. The real_server monitor template is shown in Figure 4.2.

Figure 4.2 real_server monitor template

 monitor type real_server {
interval 5
timeout 16
dest *:12345
method "GET"
cmd "GetServerStats"
metrics "ServerBandwidth:1.5,CPUPercentUsage,MemoryUsage,
TotalClientCount"
agent "Mozilla/4.0 (compatible: MSIE 5.0; Windows NT)"
}

The real_server monitor template can be used as is, without modifying any of the attributes. Alternatively, you can add metrics and modify metric attribute values. To do this, you need to create a custom monitor. For example:

b monitor my_real_server '{ use real_server metrics "ServerBandwidth:2.0" }'

The complete set of metrics and metric attribute default values is shown in Table 4.3.

Table 4.3 real_server monitor metrics

Metric                     Default Coefficient   Default Threshold
ServerBandwidth (Kbps)     1.0                   10,000
CPUPercentUsage            1.0                   80
MemoryUsage (Kb)           1.0                   100,000
TotalClientCount           1.0                   1,000
RTSPClientCount            1.0                   500
HTTPClientCount            1.0                   500
PNAClientCount             1.0                   500
UDPTransportCount          1.0                   500
TCPTransportCount          1.0                   500
MulticastTransportCount    1.0                   500

The metric coefficient is a factor determining how heavily the metric's value counts in the overall ratio weight calculation. The metric threshold is the highest value allowed for the metric if the metric is to have any weight at all. To understand how to use these values, it is necessary to understand how the overall ratio weight is calculated. The overall ratio weight is the sum of relative weights calculated for each metric. The relative weights, in turn, are based on three factors:

  • the value for the metric returned by the monitor
  • the coefficient value
  • the threshold value

    Given these values, the relative weight is calculated as follows:

    w = ((threshold - value) / threshold) * coefficient

You can see that the higher the coefficient, the greater the relative weight calculated for the metric. Similarly, the higher the threshold, the greater the relative weight calculated for any metric value that is less than the threshold. (When the value reaches the threshold, the weight goes to zero.)
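The calculation above can be worked through in Python. The coefficients and thresholds below follow Table 4.3 (with the ServerBandwidth coefficient of 1.5 from the Figure 4.2 template); the monitored values are hypothetical.

```python
# name: (coefficient, threshold, hypothetical monitored value)
metrics = {
    "ServerBandwidth": (1.5, 10_000, 4_000),
    "CPUPercentUsage": (1.0, 80, 60),
    "MemoryUsage":     (1.0, 100_000, 50_000),
}

def relative_weight(coefficient, threshold, value):
    # w = ((threshold - value) / threshold) * coefficient
    # A value at or above the threshold contributes no weight.
    if value >= threshold:
        return 0.0
    return ((threshold - value) / threshold) * coefficient

# The overall ratio weight is the sum of the relative weights.
overall = sum(relative_weight(c, t, v) for c, t, v in metrics.values())
```

With these sample values: ServerBandwidth contributes (6,000/10,000) * 1.5 = 0.9, CPUPercentUsage (20/80) * 1.0 = 0.25, and MemoryUsage (50,000/100,000) * 1.0 = 0.5, for an overall weight of 1.65.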

Note that the default coefficient and default threshold values shown in Table 4.3 are metric defaults, not template defaults. The template defaults take precedence over the metric defaults, just as user-specified values in the custom real_server monitor take precedence over the template defaults. For example, in Figure 4.2, the template specifies a coefficient value of 1.5 for ServerBandwidth and no value for the other metrics. This means that the template will use the template default of 1.5 for the ServerBandwidth coefficient and the metric default of 1 for the coefficients of all other metrics. However, if a custom monitor my_real_server were configured specifying 2.0 as the ServerBandwidth coefficient, this user-specified value would override the template default.

The syntax for specifying non-default coefficient or threshold values is:

<metric>:[<coefficient> | *][:<threshold>]

The following examples show how to specify a coefficient value only, a threshold value only, and a coefficient and a threshold value, respectively.

b monitor my_real_server '{ use real_server metrics CPUPercentUsage:1.5 }'

b monitor my_real_server '{ use real_server metrics CPUPercentUsage:*:70 }'

b monitor my_real_server '{ use real_server metrics CPUPercentUsage:1.5:70 }'

Metric coefficient and threshold are the only non-template defaults. If a metric not in the template is to be added to the custom monitor, it must be added to the metric list:

b monitor my_real_server '{ use real_server metrics "HTTPClientCount" }'
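The metric specification syntax above can be illustrated with a small parser. This is a sketch, not part of bigpipe; the defaults come from Table 4.3, and only two metrics are included for brevity.

```python
# Metric defaults from Table 4.3: name -> (coefficient, threshold).
METRIC_DEFAULTS = {
    "CPUPercentUsage": (1.0, 80),
    "ServerBandwidth": (1.0, 10_000),
}

def parse_metric(spec):
    """Parse "<metric>", "<metric>:<coeff>", "<metric>:*:<threshold>",
    or "<metric>:<coeff>:<threshold>", falling back to the metric
    defaults for any part left out ("*" keeps the default coefficient)."""
    parts = spec.split(":")
    name = parts[0]
    coeff, thresh = METRIC_DEFAULTS[name]
    if len(parts) >= 2 and parts[1] != "*":
        coeff = float(parts[1])
    if len(parts) == 3:
        thresh = float(parts[2])
    return name, coeff, thresh
```

For example, "CPUPercentUsage:*:70" overrides only the threshold, while "CPUPercentUsage:1.5" overrides only the coefficient.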

To associate the monitor with the member node

Associate the custom health check monitor with the server node, creating an instance of the monitor for that node:

b node <node_addr> monitor use my_real_server

To set the load balancing method to Dynamic Ratio

Create or modify the load balancing pool to which the server belongs to use Dynamic Ratio load balancing:

b pool <pool_name> { lb_method dynamic_ratio <member definition>... }

Configuring Windows servers with WMI

For Windows, BIG-IP provides a Data Gathering Agent F5Isapi.dll for the server. Configuring a Windows platform for Dynamic Ratio load balancing consists of four tasks:

  • Installing the Data Gathering Agent F5Isapi.dll on the server
  • Configuring a wmi health check monitor on the BIG-IP
  • Associating the health check monitor with the server to gather the metrics
  • Creating or modifying the server pool to use Dynamic Ratio load balancing

To install the Data Gathering Agent (F5Isapi) on the server

  1. Download the Data Gathering Agent (F5Isapi.dll) from the BIG-IP. The plugin is located in /usr/contrib/f5/isapi. (The URL is https://<bigip_address>/doc/isapi/f5isapi.dll.)
  2. Copy f5isapi.dll to the directory C:\Inetpub\scripts.
  3. Open the Internet Services Manager.
  4. In the left pane of the Internet Services Manager, open the folder <machine_name>\Default Web Site\Scripts, where <machine_name> is the name of the server you are configuring. The contents of the Scripts folder open in the right pane.
  5. In the right pane, right click F5Isapi.dll and select Properties.
    The Properties dialog box for F5Isapi.dll opens.
  6. Deselect Logvisits. (Logging of each visit to the agent quickly fills up the log files.)
  7. Click the File Security tab.
    The File Security options appear.
  8. In the Anonymous access and authentication control group box, click Edit.
    The Authentication Methods dialog box opens.
  9. In the Authentication methods dialog box, clear all check boxes, then select Basic Authentication.
  10. In the Authentication methods dialog box, click OK to accept the changes.
  11. In the Properties dialog box, click Apply.
  12. The WMI Data Gathering Agent is now ready to be used.

To configure a wmi monitor for the server node

Using the Configuration utility or the bigpipe command, create a health check monitor using the wmi monitor template. The wmi monitor template is shown in Figure 4.3.

Figure 4.3 wmi monitor template

 monitor type wmi {
interval 5
timeout 16
dest *:12346
username ""
password ""
method "POST"
urlpath "/scripts/F5Isapi.dll"
cmd "GetCPUInfo, GetDiskInfo, GetOSInfo"
metrics "LoadPercentage, DiskUsage, PhysicalMemoryUsage:1.5,
VirtualMemoryUsage:2.0"
post "<input type='hidden' name='RespFormat' value='HTML'>"
agent "Mozilla/4.0 (compatible: MSIE 5.0; Windows NT)
}

The monitor template contains default values for all the attributes. These are template defaults. In creating a custom monitor from the template, the only default values you are required to change are the null values for username and password. For example:

b monitor my_wmi '{ use wmi username "dave" password "$getm" }'

You may also add commands and metrics and modify metric attribute values. The complete set of commands, associated metrics, and metric attribute default values are shown in Table 4.4.

Table 4.4 wmi monitor commands and metrics

Command            Metric                       Default Coefficient   Default Threshold
GetCPUInfo         LoadPercentage (%)           1.0                   80
GetOSInfo          PhysicalMemoryUsage (%)      1.0                   80
                   VirtualMemoryUsage (%)       1.0                   80
                   NumberRunningProcesses       1.0                   100
GetDiskInfo        DiskUsage (%)                1.0                   90
GetPerfCounters    TotalKBytesPerSec            1.0                   10,000
                   ConnectionAttemptsPerSec     1.0                   500
                   CurrentConnections           1.0                   500
                   GETRequestsPerSec            1.0                   500
                   PUTRequestsPerSec            1.0                   500
                   POSTRequestsPerSec           1.0                   500
                   AnonymousUsersPerSec         1.0                   500
                   CurrentAnonymousUsers        1.0                   500
                   NonAnonymousUsersPerSec      1.0                   500
                   CurrentNonAnonymousUser      1.0                   500
                   CGIRequestsPerSec            1.0                   500
                   CurrentCGIRequests           1.0                   500
                   ISAPIRequestsPerSec          1.0                   500
                   CurrentISAPIRequests         1.0                   500
GetWinMediaInfo    AggregateReadRate            1.0                   10,000 Kbps
                   AggregateSendRate            1.0                   10,000 Kbps
                   ActiveLiveUnicastStreams     1.0                   1000
                   ActiveStreams                1.0                   1000
                   ActiveTCPStreams             1.0                   1000
                   ActiveUDPStreams             1.0                   1000
                   AllocatedBandwidth           1.0                   10,000 Kbps
                   AuthenticationRequests       1.0                   1000
                   AuthenticationsDenied        1.0                   100
                   AuthorizationRequests        1.0                   1000
                   AuthorizationsRefused        1.0                   100
                   ConnectedClients             1.0                   500
                   ConnectionRate               1.0                   500
                   HTTPStreams                  1.0                   1000
                   HTTPStreamsReadingHeader     1.0                   500
                   HTTPStreamsStreamingBody     1.0                   500
                   LateReads                    1.0                   100
                   PendingConnections           1.0                   100
                   PluginErrors                 1.0                   100
                   PluginEvents                 1.0                   100
                   SchedulingRate               1.0                   100
                   StreamErrors                 1.0                   100
                   StreamTerminations           1.0                   100
                   UDPResendRequests            1.0                   100
                   UDPResendsSent               1.0                   100
For more information about the metric coefficients and thresholds, refer to the description accompanying Table 4.3, real_server monitor metrics, on page 4-11. Note that for a wmi monitor, you can add commands. To do this, simply add them to the cmd list.

To associate the monitor with the member node

Associate the custom health check monitor with the server node, creating an instance of the monitor for that node:

b node <node_addr> monitor use my_wmi

To set the load balancing mode to Dynamic Ratio

Use the following syntax to create or modify the load balancing pool to which the server belongs to use Dynamic Ratio load balancing:

b pool <pool_name> { lb_method dynamic_ratio <member definition>...}

Configuring SNMP servers

The BIG-IP includes an SNMP data collecting agent that can query remote SNMP agents of various types, including the UC Davis agent and the Windows 2000 Server agent. Configuring a server to use its SNMP agent for Dynamic Ratio load balancing consists of three tasks:

  • Configuring a health check monitor, using either the Configuration utility or the bigpipe command
  • Associating the health check monitor with the server to gather the metrics
  • Creating or modifying the server pool to use Dynamic Ratio load balancing

    BIG-IP provides two templates that you can use to create a health monitor for a server that uses an SNMP agent. These two monitor templates are:

  • snmp_dca
    Use this template when you want to use default values or specify new values for CPU, memory, and disk metrics. When using this template, you can also specify values for other types of metrics that you wish to gather.
  • snmp_dca_base
    Use this template when you want to use default values or specify values for metrics other than CPU, memory, and disk usage. When using this template, values for CPU, memory, and disk metrics are omitted.

Note: For a description of these templates and the default values for each metric, see Working with templates for EAV monitors, on page 4-141.

Figure 4.4 shows a monitor based on the snmp_dca monitor template. This monitor uses the default metric values. A user can optionally specify variables for user-defined metrics.

Figure 4.4 A monitor based on the snmp_dca template

 monitor my_snmp_dca     
'{ use snmp_dca
interval 10
timeout 30
dest *:161
agent_type "UCD"
cpu_coefficient "1.5"
cpu_threshold "80"
mem_coefficient "1.0"
mem_threshold "70"
disk_coefficient "2.0"
disk_threshold "90"
USEROID ""
USEROID_COEFFICIENT "1.0"
USEROID_THRESHOLD "90"
}'

Figure 4.5 shows a monitor based on the snmp_dca_base monitor template. This monitor uses the default metric values.

Figure 4.5 A monitor based on the snmp_dca_base template

monitor my_snmp_dca_base '{ use snmp_dca_base
interval 10
timeout 30
dest *:161

USEROID ""
USEROID_COEFFICIENT "1.0"
USEROID_THRESHOLD "90"
}'

Note: In the above examples, the user-defined variables are specified as USEROID, USEROID_COEFFICIENT, and USEROID_THRESHOLD. You can create any variable names you want. Although the values shown in the above examples are entered in uppercase, uppercase is not required.

To configure a monitor based on either the snmp_dca or snmp_dca_base template, you can use either the Configuration utility or the bigpipe command.

Note: The default agent type specified in the snmp_dca template is UC Davis. When configuring a monitor for a Windows 2000 server, you must change the agent type to Windows 2000.

To configure an SNMP monitor using the Configuration utility

  1. In the Navigation pane, click Monitors.
  2. Click the Add button.
    This displays the Configure Monitor Name and Parent screen.
  3. Enter a unique name for the monitor in the Name box and select a template from the Inherits from box. If you want the monitor to include CPU, memory, disk, and user metrics, select the snmp_dca template. If you want the monitor to include user metrics only, select the snmp_dca_base template.
  4. Click Next. This displays the Configure Basic Properties screen.
  5. Retain or change the values in the Interval and Timeout boxes.
  6. Click Next. This displays the Configure EAV SNMP DCA Monitor screen.
  7. Retain or change the values for CPU, memory, and disk use. Also note that in the snmp_dca template, the default value for the Agent Type property is UCD. To configure a monitor for a Windows 2000 agent, change this value to WIN2000.
  8. Click Next.
    This displays the Configure EAV Variables screen.
  9. If you are specifying user-defined metrics, configure the EAV variables by specifying a unique name and a value for each Name/Value pair.

    The three variables (that is, Name/Value pairs) correspond to OID, coefficient, and threshold. Note that if the value of the OID variable is an absolute value, verify that the user-defined threshold value is also an absolute value. If the threshold value is not absolute, BIG-IP might not factor the value into the load calculation. The default user-defined threshold value is 90.
  10. Click Next.
    This displays the Configure Destination Address and Service (Alias) screen. We recommend that you use the default values shown here.
  11. Click Done.

To configure an SNMP monitor using the bigpipe command

When configuring an SNMP monitor using the bigpipe command, you can use the default CPU, memory, and disk coefficient and threshold values specified in the templates, or you can change the default values. Optionally, you can specify coefficient and threshold values for gathering other types of data. Note that if the monitor you are configuring is for a type of SNMP agent other than UC Davis, you must specify the agent type as an argument to the bigpipe command.

The following command-line examples show various ways to configure an SNMP monitor. Note that although arguments for user-defined metrics are shown in uppercase, uppercase is not required.

To configure a monitor for a UC Davis SNMP agent, using default CPU, memory, and disk use values, use the bigpipe monitor command, as in the following example.

b monitor my_snmp_dca '{ use snmp_dca }'

To configure a monitor for a UC Davis SNMP agent, using all default CPU, memory threshold, and disk use values and specifying a non-default memory coefficient value, use the bigpipe monitor command, as in the following example.

b monitor my_snmp_dca '{ use snmp_dca mem_coefficient "1.5" }'

To configure a monitor for a UC Davis SNMP agent, using default CPU, memory threshold, and disk use values and specifying non-default memory coefficient and user values, use the bigpipe monitor command, as in the following example.

b monitor my_snmp_dca '{ use snmp_dca mem_coefficient "1.5" \
USEROID ".1.3.6.1.4" USEROID_COEFFICIENT "1.5" USEROID_THRESHOLD \
"80" }'

To configure a monitor for a UC Davis SNMP agent, omitting CPU, memory, and disk use values and using default user coefficient and user threshold values (1.0 and 90 respectively), use the bigpipe monitor command, as in the following example.

b monitor my_snmp_dca_base '{ use snmp_dca_base USEROID ".1.3.6.1.4" }'

To configure a monitor for a UC Davis SNMP agent, omitting CPU, memory, and disk use values and specifying non-default user values, use the bigpipe monitor command, as in the following example.

b monitor my_snmp_dca_base '{ use snmp_dca_base USEROID \
".1.3.6.1.4" USEROID_COEFFICIENT "1.5" USEROID_THRESHOLD "80" }'

To configure a monitor for a Windows 2000 SNMP agent, using default CPU, memory, and disk use values, use the bigpipe monitor command, as in the following example.

b monitor my_win2000_snmp_dca '{use snmp_dca agent_type "WIN2000"}'

To associate the health check monitor with the member node

Use the following syntax to associate the custom health check monitor with the server node and create an instance of the monitor for that node:

b node <node_addr> monitor use my_snmp_dca

To set the load balancing method to Dynamic Ratio

Use the following syntax to create or modify the load balancing pool to which the server belongs to use Dynamic Ratio load balancing:

b pool <pool_name> { lb_method dynamic_ratio <member definition>... }

Priority-based member activation

You can load balance traffic across all members of a pool or across only those members that are currently activated according to their priority number. In priority-based member activation, each member in a pool is assigned a priority number that places it in a priority group designated by that number. With all nodes available (meaning they are enabled, marked up, and have not exceeded their connection limit), the BIG-IP distributes connections to all nodes in the highest priority group only, that is, the group designated by the highest priority number. The min_active_members value determines the minimum number of members that must remain available for traffic to be confined to that group. If the number of available nodes in the highest priority group goes below the minimum number, the BIG-IP also distributes traffic to the next lower priority group, and so on.

Figure 4.6 Sample pool configuration for priority based member activation

 pool my_pool {
lb_mode fastest
min_active_members 2
member 10.12.10.1:80 priority 3
member 10.12.10.2:80 priority 3
member 10.12.10.3:80 priority 3

member 10.12.10.4:80 priority 2
member 10.12.10.5:80 priority 2
member 10.12.10.6:80 priority 2

member 10.12.10.7:80 priority 1
member 10.12.10.8:80 priority 1
member 10.12.10.9:80 priority 1
}

The configuration shown in Figure 4.6 has three priority groups, 3, 2, and 1. Connections are first distributed to all nodes with priority 3. If fewer than two priority 3 nodes are available, traffic is directed to the priority 2 nodes as well. If both the priority 3 group and the priority 2 group have fewer than two nodes available, traffic is directed to the priority 1 group as well. The BIG-IP continuously monitors the higher priority groups, and each time a higher priority group once again has the minimum number of available nodes, the BIG-IP again limits traffic to that group.
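The group-selection logic described above can be sketched in Python. This is an illustrative model only, not BIG-IP code: it accumulates priority groups from the highest priority number downward until at least min_active_members available members are included.

```python
def eligible_members(members, min_active_members):
    """Return the members eligible to receive traffic.

    members: list of (priority, available) tuples, where available is True
    when the member is enabled, marked up, and under its connection limit.
    Groups are added from the highest priority number downward until the
    count of available members meets min_active_members.
    """
    eligible = []
    available_count = 0
    # Walk priority groups from highest number to lowest.
    for priority in sorted({p for p, _ in members}, reverse=True):
        group = [m for m in members if m[0] == priority]
        eligible.extend(group)
        available_count += sum(1 for _, up in group if up)
        if available_count >= min_active_members:
            break
    return [m for m in eligible if m[1]]

# Mirroring Figure 4.6: with all priority-3 members up and
# min_active_members 2, only the three priority-3 members get traffic.
pool = [(3, True), (3, True), (3, True),
        (2, True), (2, True), (2, True),
        (1, True), (1, True), (1, True)]
print(len(eligible_members(pool, 2)))  # 3
```

If two of the priority-3 members go down, the count of available members in that group falls below 2, so the priority-2 group is activated as well.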

Warning: If you set the load balancing mode to Ratio (as opposed to Ratio Member), you must define the ratio settings for each node address.

Persistence

If you are setting up an e-commerce or other type of transaction-oriented site, you may need to configure persistence on the BIG-IP. Persistence is one of the pool attributes listed in Table 4.1.

Whether you need to configure persistence or not simply depends on how you store client-specific information, such as items in a shopping cart, or airline ticket reservations. For example, you may store the airline ticket reservation information in a back-end database that all nodes can access, or on the specific node to which the client originally connected, or in a cookie on the client's machine.

If you store client-specific information on specific nodes, you need to configure persistence. When you turn on persistence, returning clients can bypass load balancing and instead can go to the node where they last connected in order to get to their saved information.

The BIG-IP tracks information about individual persistent connections, and keeps the information only for a given period of time. The way in which persistent connections are identified depends on the type of persistence.
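The record-keeping behavior can be sketched as a small table of persistence entries with an expiry time. This is a hypothetical illustration of the concept, not BIG-IP internals; the class and method names are invented for the example.

```python
import time

class PersistenceTable:
    """Hypothetical sketch of a persistence table: maps a client key
    to a node, and forgets the mapping after `timeout` seconds."""

    def __init__(self, timeout):
        self.timeout = timeout
        self.entries = {}  # key -> (node, last_seen)

    def record(self, key, node, now=None):
        now = time.monotonic() if now is None else now
        self.entries[key] = (node, now)

    def lookup(self, key, now=None):
        now = time.monotonic() if now is None else now
        entry = self.entries.get(key)
        if entry is None:
            return None
        node, last_seen = entry
        if now - last_seen > self.timeout:
            del self.entries[key]        # record expired
            return None
        self.entries[key] = (node, now)  # refresh on use
        return node

table = PersistenceTable(timeout=3600)
table.record("10.1.1.5", "n1:http", now=0)
print(table.lookup("10.1.1.5", now=100))   # n1:http (within timeout)
print(table.lookup("10.1.1.5", now=5000))  # None (expired)
```

The key used for the lookup depends on the persistence type, as described next: a client address for simple persistence, a session ID for SSL persistence, and so on.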

Types of persistence

The types of persistence are:

  • Simple persistence
    Simple persistence supports TCP and UDP protocols, and tracks connections based only on the client IP address.
  • HTTP cookie persistence
    HTTP cookie persistence uses an HTTP cookie stored on a client's computer to allow the client to reconnect to the same server previously visited at a web site.
  • SSL persistence
    SSL persistence is a type of persistence that tracks SSL connections using the SSL session ID. Even when the client's IP address changes, the BIG-IP still recognizes the connection as being persistent based on the session ID.
  • SIP Call-ID persistence
    SIP persistence is a type of persistence used for proxy servers that receive Session Initiation Protocol (SIP) messages sent through UDP. SIP is a protocol that enables real-time messaging, voice, data, and video.
  • Destination address affinity (sticky persistence)
    Destination address affinity directs requests for a certain destination to the same proxy server, regardless of which client the request comes from.
  • WTS persistence
    Windows Terminal Server (WTS) persistence tracks and load balances connections between WTS client-server configurations.

Note: All persistence methods are properties of pools.

Persistence options

When setting up persistence, you can enable either of the following two options:

  • Persistence across virtual servers with the same address
    Persistence across virtual servers with the same address causes the BIG-IP to maintain persistence only when the virtual server hosting the connection has the same virtual address as the virtual server hosting the initial persistent connection.
  • Persistence across all virtual servers
    Persistence across all virtual servers causes the BIG-IP to maintain persistence for all connections requested by the same client, regardless of which virtual server hosts each individual connection initiated by the client.

Simple persistence

Simple persistence tracks connections based only on the client IP address. When a client requests a connection to a virtual server that supports simple persistence, the BIG-IP checks to see if that client previously connected, and if so, returns the client to the same node.

You may want to use SSL persistence and simple persistence together. In situations where an SSL session ID times out, or where a returning client does not provide a session ID, you may want the BIG-IP to direct the client to the original node based on the client's IP address. As long as the client's simple persistence record has not timed out, the BIG-IP can successfully return the client to the appropriate node.

Persistence settings for pools apply to all protocols. When the persistence timer is set to a value greater than 0, persistence is on. When the persistence timer is set to 0, persistence is off.

To configure simple persistence for pools using the Configuration utility

  1. In the navigation pane, click Pools.
    The Pools screen opens.
  2. Select the pool for which you want to configure simple persistence.
    The Pool Properties screen opens.
  3. Click the Persistence tab.
    The Persistence Properties screen opens.
  4. In the Persistence Type section, click the Simple button.
    Type the following information:

    • Timeout (seconds)
      Set the number of seconds for persistence on the pool. (This option is not available if you are using rules.)
    • Mask
      Set the persistence mask for the pool. The persistence mask determines persistence based on the portion of the client's IP address that is specified in the mask.
  5. Click the Apply button.

To configure simple persistence for pools from the command line

You can use the bigpipe pool command with the modify keyword to set simple persistence for a pool. Note that a timeout greater than 0 turns persistence on, and a timeout of 0 turns persistence off.

b pool <pool_name> modify { \
persist_mode simple \
simple_timeout <timeout> \
simple_mask <ip_mask> }

For example, if you want to set simple persistence on the pool my_pool, type the following command:

b pool my_pool modify { \
persist_mode simple \
simple_timeout 3600 \
simple_mask 255.255.255.0 }

Using a simple timeout and a persist mask on a pool

The persist mask feature works only on pools that implement simple persistence. By adding a persist mask, you identify a range of client IP addresses to manage together as a single simple persistent connection when connecting to the pool.
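The effect of a persist mask can be sketched in Python (an illustration, not BIG-IP code): the mask is ANDed with the client address octet by octet, and clients whose masked addresses match share one simple-persistence entry.

```python
def masked_key(client_ip, mask):
    """Apply a persist mask to a client IP address. Clients whose
    masked addresses match are managed as one persistent connection."""
    octets = [int(o) for o in client_ip.split(".")]
    mask_octets = [int(o) for o in mask.split(".")]
    return ".".join(str(o & m) for o, m in zip(octets, mask_octets))

# Two clients in the same class C network map to the same entry:
print(masked_key("192.168.10.7", "255.255.255.0"))    # 192.168.10.0
print(masked_key("192.168.10.200", "255.255.255.0"))  # 192.168.10.0
```

The example addresses are hypothetical; with the mask 255.255.255.0 from the command-line example below, any two clients in the same class C network persist to the same node.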

To apply a simple timeout and persist mask using the Configuration utility

  1. In the navigation pane, click Pools.
    The Pools screen opens.
  2. In the Pools list, click the pool for which you want to set up simple persistence.
    The properties screen opens.
  3. Click the Persistence tab.
    The Persistence screen opens.
  4. Select Simple mode.
  5. In the Timeout box, type the timeout in seconds.
  6. In the Mask box, type the persist mask you want to apply.
  7. Click the Apply button.

To apply a simple timeout and persist mask from the command line

The complete syntax for the command is:

b pool <pool_name> modify { \
[<lb_mode_specification>] \
persist_mode simple \
simple_timeout <timeout> \
simple_mask <dot_notation_longword> }

For example, the following command would keep persistence information together for all clients within a C class network that connect to the pool my_pool:

b pool my_pool modify { \
persist_mode simple \
simple_timeout 1200 \
simple_mask 255.255.255.0 }

You can turn off a persist mask for a pool by using the none option in place of the simple_mask mask. To turn off the persist mask that you set in the preceding example, use the following command:

b pool my_pool modify { simple_mask none }

To display all persistence information for the pool named my_pool, use the show option:

b pool my_pool persist show

HTTP cookie persistence

You can set up the BIG-IP to use HTTP cookie persistence. This method of persistence uses an HTTP cookie stored on a client's computer to allow the client to reconnect to the same server previously visited at a web site.

There are four types of cookie persistence available:

  • Insert mode
  • Rewrite mode
  • Passive mode
  • Hash mode

The mode you choose affects how the cookie is handled by the BIG-IP when it is returned to the client.

Insert mode

If you specify Insert mode, the information about the server to which the client connects is inserted in the header of the HTTP response from the server as a cookie. The cookie is named BIGipServer<pool_name>, and it includes the address and port of the server handling the connection. The expiration date for the cookie is set based on the timeout configured on the BIG-IP.

To activate Insert mode using the Configuration utility

  1. In the navigation pane, click Pools.
    The Pools screen opens.
  2. In the Pools list, click the pool for which you want to set up Insert mode.
    The properties screen opens.
  3. Click the Persistence tab.
    The Persistence screen opens.
  4. Click the Active HTTP Cookie button.
  5. Select Insert mode from the Method list.
  6. Type the timeout value in days, hours, minutes, and seconds. This value determines how long the cookie lives on the client computer before it expires.
  7. Click the Apply button.

To activate Insert mode from the command line

To activate Insert mode from the command line, use the following syntax:

b pool <pool_name> { <lb_mode_specification> \
persist_mode cookie \
cookie_mode insert \
cookie_expiration <timeout> \
<member definition> }

The <timeout> value for the cookie is written using the following format:

<days>d hh:mm:ss
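A small helper can show how this format maps to a number of seconds. The function name is hypothetical and the parsing is a sketch of the documented format, not a BIG-IP utility.

```python
def cookie_timeout_seconds(spec):
    """Parse a cookie expiration written as '<days>d hh:mm:ss'
    into a total number of seconds (hypothetical helper)."""
    days_part, clock = spec.split()
    days = int(days_part.rstrip("d"))
    hours, minutes, seconds = (int(x) for x in clock.split(":"))
    return ((days * 24 + hours) * 60 + minutes) * 60 + seconds

print(cookie_timeout_seconds("1d 12:00:00"))  # 129600
```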

Rewrite mode

If you specify Rewrite mode, the BIG-IP intercepts the Set-Cookie header, named BIGipCookie, sent from the server to the client, and overwrites the name and value of the cookie. The new cookie is named BIGipServer<pool_name>, and it includes the address and port of the server handling the connection.

Rewrite mode requires you to set up the cookie created by the server. In order for Rewrite mode to work, there needs to be a blank cookie coming from the web server for the BIG-IP to rewrite. With Apache variants, the cookie can be added to every web page header by adding an entry in the httpd.conf file:

Header add Set-Cookie BIGipCookie=0000000000000000000000000...

(The cookie must contain a total of 120 zeros.)

Note: For backward compatibility, the blank cookie may contain only 75 zeros. However, cookies of this size do not allow you to use rules and persistence together.
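Since the httpd.conf line above elides the full run of zeros, a one-liner can generate the complete 120-zero entry (a convenience sketch; the header name and cookie name come from the text above):

```python
# Build the httpd.conf line with a blank 120-zero BIGipCookie
# for Rewrite mode.
blank_value = "0" * 120
header_line = "Header add Set-Cookie BIGipCookie=" + blank_value
assert len(blank_value) == 120
print(header_line)
```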

To activate Rewrite mode cookie persistence using the Configuration utility

  1. In the navigation pane, click Pools.
    The Pools screen opens.
  2. In the Pools list, click the pool for which you want to set up Rewrite mode.
    The properties screen for the pool you clicked opens.
  3. Click the Persistence tab.
    The Persistence screen opens.
  4. Click the Active HTTP Cookie button.
  5. Select Rewrite mode from the Method list.
  6. Type the timeout value in days, hours, minutes, and seconds. This value determines how long the cookie lives on the client computer before it expires.
  7. Click the Apply button.

To activate Rewrite mode cookie persistence from the command line

To activate Rewrite mode from the command line, use the following syntax:

b pool <pool_name> { \
<lb_mode_specification> \
persist_mode cookie \
cookie_mode rewrite \
cookie_expiration <timeout> \
<member definition> }

The <timeout> value for the cookie is written using the following format:

<days>d hh:mm:ss

Passive mode

If you specify Passive mode, the BIG-IP does not insert or search for blank Set-Cookies in the response from the server. It does not try to set up the cookie. In this mode, the server provides the cookie formatted with the correct node information and timeout.

In order for Passive mode to work, there needs to be a cookie coming from the web server with the appropriate node information in the cookie. With Apache variants, the cookie can be added to every web page header by adding an entry in the httpd.conf file:

Header add Set-Cookie: "BIGipServer my_pool=184658624.20480.000; expires=Sat, 19-Aug-2002 19:35:45 GMT; path=/"

In this example, my_pool is the name of the pool that contains the server node, 184658624 is the encoded node address and 20480 is the encoded port. You can generate a cookie string with encoding automatically added using the bigpipe makecookie command:

b makecookie <server_address:service> [ > <file> ]

The command above prints a cookie template, similar to the two following examples, to the screen or to the redirect file specified.

Set-Cookie:BIGipServer[poolname]=336268299.20480.0000; path=/

Set-Cookie:BIGipServer[poolname]=336268299.20480.0000; expires=Sat, 01-Jan-2002 00:00:00 GMT; path=/

To create your cookie from this string, type the actual pool names and the desired expiration date and time.

Alternatively, you can perform the encoding using the following equation for address (a.b.c.d):

d*(256^3) + c*(256^2) + b*256 +a

To encode the port, take the two bytes that store the port and reverse them. So, port 80 (0 * 256 + 80) becomes 80 * 256 + 0 = 20480, and port 1433 (5 * 256 + 153) becomes 153 * 256 + 5 = 39173.
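The address equation and port byte swap above can be sketched directly in Python. The function names and the example IP address are invented for illustration; the formulas are those given in the text.

```python
def encode_address(ip):
    """Encode a dotted-quad address a.b.c.d for a BIGipServer cookie:
    d*(256^3) + c*(256^2) + b*256 + a."""
    a, b, c, d = (int(o) for o in ip.split("."))
    return d * 256**3 + c * 256**2 + b * 256 + a

def encode_port(port):
    """Encode a port by reversing its two bytes."""
    return (port & 0xFF) * 256 + (port >> 8)

print(encode_port(80))    # 20480
print(encode_port(1433))  # 39173

# Assemble a Passive mode cookie value (hypothetical address):
value = "%d.%d.0000" % (encode_address("192.168.10.1"), encode_port(80))
print("Set-Cookie: BIGipServer my_pool=" + value + "; path=/")
```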

To activate Passive mode cookie persistence using the Configuration utility

After you set up the cookie created by the web server, you must activate Passive mode on the BIG-IP.

  1. In the navigation pane, click Pools.
    The Pools screen opens.
  2. In the Pools list, click the pool for which you want to set up Passive mode.
    The properties screen for the pool you clicked opens.
  3. Click the Persistence tab.
    The Persistence screen opens.
  4. Select Passive HTTP Cookie mode.
  5. Click the Apply button.

To activate Passive mode cookie persistence from the command line

After you set up the cookie created by the web server, you must activate Passive mode on the BIG-IP. To activate HTTP cookie persistence from the command line, use the following syntax:

b pool <pool_name> { \
<lb_mode_specification> \
persist_mode cookie \
cookie_mode passive \
<member definition> }

Note: The <timeout> value is not used in Passive mode.

Hash mode

If you specify Hash mode, the BIG-IP uses a hash of the cookie value to consistently map that value to a specific node. When the client returns to the site, the BIG-IP uses the cookie information to return the client to a given node. With this mode, the web server must generate the cookie; the BIG-IP does not create the cookie automatically as it does with Insert mode.

To configure the cookie persistence hash option using the Configuration utility

Before you follow this procedure, you must configure at least one pool.

  1. In the navigation pane, click Pools.
    The Pools screen opens.
  2. In the Pools list, click the pool for which you want to set up hash mode persistence.
    The properties screen for the pool you clicked opens.
  3. Click the Persistence tab.
    The Persistence screen opens.
  4. Click the Cookie Hash button.
    Set the following values (see the following Table 4.5 for more information):

    • Cookie Name
      Type the name of an HTTP cookie being set by the Web site. This could be something like Apache or SSLSESSIONID. The name depends on the type of web server your site is running.
    • Hash Values
      The Offset is the number of bytes in the cookie to skip before calculating the hash value. The Length is the number of bytes to use when calculating the hash value.
  5. Click the Apply button.

To configure the hash cookie persistence option from the command line

Use the following syntax to configure the hash cookie persistence option:

b pool <pool_name> { \
<lb_mode_specification> \
persist_mode cookie \
cookie_mode hash \
cookie_name <cookie_name> \
cookie_hash_offset <cookie_value_offset> \
cookie_hash_length <cookie_value_length> \
<member definition> }

The <cookie_name>, <cookie_value_offset>, and <cookie_value_length> values are described in Table 4.5.

Table 4.5 The cookie hash mode values

Hash mode value           Description
<cookie_name>             The name of an HTTP cookie being set by a Web site.
<cookie_value_offset>     The number of bytes in the cookie to skip before calculating the hash value.
<cookie_value_length>     The number of bytes to use when calculating the hash value.
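How offset and length drive node selection can be sketched as follows. The hash function here is a simple polynomial hash chosen for illustration; BIG-IP's actual hash algorithm is not documented in this chapter, and the cookie and node names are hypothetical.

```python
def pick_node(cookie_value, offset, length, nodes):
    """Sketch of hash-mode selection: hash `length` bytes of the
    cookie value starting at `offset`, then map the hash to one of
    the pool's nodes. (Illustrative hash, not BIG-IP's algorithm.)"""
    segment = cookie_value[offset:offset + length].encode()
    h = 0
    for byte in segment:
        h = (h * 31 + byte) % 2**32  # simple polynomial hash
    return nodes[h % len(nodes)]

nodes = ["n1:http", "n2:http", "n3:http"]
node = pick_node("SSLSESSIONID=abc123def456", offset=13, length=6,
                 nodes=nodes)
# The same cookie value always maps to the same node:
assert node == pick_node("SSLSESSIONID=abc123def456", 13, 6, nodes)
```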

SSL persistence

SSL persistence is a type of persistence that tracks SSL connections using the SSL session ID, and it is a property of each individual pool. Using SSL persistence can be particularly important if your clients typically have translated IP addresses or dynamic IP addresses, such as those that Internet service providers typically assign. Even when the client's IP address changes, the BIG-IP still recognizes the connection as being persistent based on the session ID.

You may want to use SSL persistence and simple persistence together. In situations where an SSL session ID times out, or where a returning client does not provide a session ID, you may want the BIG-IP to direct the client to the original node based on the client's IP address. As long as the client's simple persistence record has not timed out, the BIG-IP can successfully return the client to the appropriate node.

You can set up SSL persistence from the command line or using the Configuration utility. To set up SSL persistence, you need to do two things:

  • Turn SSL persistence on.
  • Set the SSL session ID timeout, which determines how long the BIG-IP stores a given SSL session ID before removing it from the system.

Note: Do not enable SSL persistence on pools that load balance plain-text traffic, that is, traffic resulting from SSL proxies on which SSL termination is enabled.

To activate SSL persistence using the Configuration utility

  1. In the navigation pane, click Pools.
    The Pools screen opens.
  2. Click the appropriate pool in the list.
    The Pool Properties screen opens.
  3. Click the Persistence tab.
    The Persistence screen opens.
  4. Click the SSL button.
  5. In the Timeout box, type the number of seconds that the BIG-IP should store SSL session IDs before removing them from the system.
  6. Click the Apply button.

To activate SSL persistence from the command line

Use the following syntax to activate SSL persistence from the command line:

b pool <pool_name> modify { persist_mode ssl ssl_timeout <timeout> }

For example, if you want to set SSL persistence on the pool my_pool, type the following command:

b pool my_pool modify { persist_mode ssl ssl_timeout 3600 }

To display persistence information for a pool

Use the show option to display the persistence configuration for a pool:

b pool <pool_name> persist show

For example, to display all persistence information for the pool named my_pool, type:

b pool my_pool persist show

SIP Call-ID persistence

Session Initiation Protocol (SIP) is an application-layer protocol that manages sessions consisting of multiple participants, thus enabling real-time messaging, voice, data, and video. With SIP, applications can communicate with one another by exchanging messages through TCP or UDP. Examples of such applications are internet conferencing and telephony, or multimedia distribution.

SIP Call-ID persistence is a new type of persistence available for server pools. You can configure Call-ID persistence for proxy servers that receive Session Initiation Protocol (SIP) messages sent through UDP.

Note: BIG-IP currently supports persistence for SIP messages sent through UDP only.

When activating SIP Call-ID persistence for a server pool, you can specify the following:

  • The name of the server pool (required)
  • A timeout value for persistence records (optional)

    This timeout value allows the BIG-IP to free up resources associated with old SIP persistence entries, without having to test each inbound packet for one of the different types of SIP final messages. A default timeout value exists, which is usually 32 seconds. This timeout value is the window of time that a stateful proxy maintains state. If you change the timeout value, we recommend that the value be no lower than the default.

To activate SIP Call-ID persistence, you can use either the Configuration utility or the bigpipe pool command.

To activate SIP persistence using the Configuration utility

  1. Start the Configuration utility.
  2. In the Navigation pane, click Pools.
    The Pools screen opens.
  3. Select a pool name.
  4. Click the Persistence tab.
  5. Click the button for SIP persistence.
  6. Click the Apply button.

To activate SIP persistence from the command line

Use the following syntax to activate SIP Call-ID persistence from the command line.

bigpipe pool <pool_name> { persist_mode sip [sip_timeout <timeout>] }

To display the contents of the hash table

To display the contents of the SIP persistence hash table, use the bigpipe command as follows:

bigpipe sip dump

Destination address affinity (sticky persistence)

You can optimize your proxy server array with destination address affinity (also called sticky persistence). Destination address affinity directs requests for a certain destination IP address to the same proxy server, regardless of which client the request comes from.

This enhancement provides the most benefits when load balancing caching proxy servers. A caching proxy server intercepts web requests and returns a cached web page if it is available. In order to improve the efficiency of the cache on these proxies, it is necessary to send similar requests to the same proxy server repeatedly. Destination address affinity can be used to cache a given web page on one proxy server instead of on every proxy server in an array. This saves the other proxies from having to duplicate the web page in their cache, wasting memory.
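The sticky behavior can be sketched as a table keyed on the (optionally masked) destination address. This is a conceptual illustration with invented names; the round-robin fallback merely stands in for whatever load balancing mode the pool uses.

```python
def sticky_key(dest_ip, sticky_mask):
    """Key sticky persistence on the masked destination address, so
    every client requesting the same destination hits the same proxy."""
    octets = [int(o) for o in dest_ip.split(".")]
    mask = [int(o) for o in sticky_mask.split(".")]
    return ".".join(str(o & m) for o, m in zip(octets, mask))

sticky_table = {}

def choose_proxy(dest_ip, proxies, sticky_mask="255.255.255.255"):
    key = sticky_key(dest_ip, sticky_mask)
    if key not in sticky_table:
        # First request for this destination: pick a proxy (round-robin
        # here stands in for the pool's configured lb mode).
        sticky_table[key] = proxies[len(sticky_table) % len(proxies)]
    return sticky_table[key]

proxies = ["proxy1", "proxy2", "proxy3"]
# Different clients, same destination -> same caching proxy:
first = choose_proxy("203.0.113.9", proxies)
assert choose_proxy("203.0.113.9", proxies) == first
```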

To activate destination address affinity using the Configuration utility

You can only activate destination address affinity on pools directly or indirectly referenced by wildcard virtual servers. For information on setting up a wildcard virtual server, see Wildcard virtual servers, on page 4-71. Follow these steps to configure destination address affinity:

  1. In the navigation pane, click Pools.
    The Pools screen opens.
  2. In the Pools list, click the pool for which you want to set up destination address affinity.
    The properties screen for the pool you clicked opens.
  3. Click the Persistence tab.
    The Persistence screen opens.
  4. Click the Destination Address Affinity button to enable destination address affinity.
  5. In the Mask box, type in the mask you want to apply to sticky persistence entries.
  6. Click the Apply button.

To activate destination address affinity from the command line

Use the following command to activate sticky persistence for a pool:

b pool <pool_name> modify { persist_mode sticky sticky_mask <ip address> }

Use the following command to delete sticky entries for the specified pool:

b pool <pool_name> sticky clear

To show the persistence configuration for the pool:

b pool <pool_name> persist show

WTS persistence

Windows Terminal Server (WTS) is a Windows feature that allows a Windows 2000 client to run Windows remotely from a Windows .NET Server system over a TCP connection. To track and load balance connections between WTS client and server systems, BIG-IP offers a type of persistence known as WTS persistence.

To activate WTS persistence, you must enable it as a BIG-IP pool attribute. You must also configure a WTS server, in either of two modes: hash mode or passive mode.

Note: For information on how to configure a WTS server for BIG-IP WTS persistence, see your Windows .NET Server product documentation.

Hash mode

When configured in hash mode, a WTS server does not participate in a session directory; that is, the server cannot share sessions with other WTS servers. Hash mode ensures that WTS clients provide data to the BIG-IP to allow the BIG-IP to consistently connect that client to the same WTS server. If the client data that BIG-IP requires is not provided, BIG-IP load balances the connection according to the way that the user has configured BIG-IP for load balancing.

Passive mode

When configured in passive mode, a WTS server participates in a session directory, that is, the server can share sessions with other WTS servers. Passive mode provides a way for BIG-IP to maintain persistent connections when WTS servers are clustered together. Normally, WTS servers, when participating in a session directory, map WTS clients to their appropriate servers. If a client connects to the wrong server in the cluster, the targeted server checks its client-server mapping and performs a rewrite to the correct server.

When configured in passive mode, however, a WTS server always rewrites the connection to the same BIG-IP virtual server, instead of to the correct WTS server directly. The BIG-IP then sends the connection to the correct WTS server.

Activating WTS persistence on BIG-IP

To activate WTS persistence on BIG-IP, you must perform three tasks:

  • Enable TCP service 3389
  • Create a pool containing all cluster members and specifying the msrdp persistence type
  • Create a virtual server that uses the pool

To enable TCP service 3389 from the command line

To enable TCP service 3389, use the following command:

b service 3389 tcp enable

To create a pool with the WTS persistence attribute from the command line

To create a pool that is configured for WTS persistence and that contains two members of a WTS cluster, use the bigpipe pool command as in the following example:

b pool my_cluster_pool { persist_mode msrdp member 11.12.1.101:3389 member 11.12.1.100:3389 }

To create a virtual server from the command line

To create a virtual server that uses the pool my_cluster_pool, use the bigpipe virtual command as in the following example:

b virtual 192.200.100.25:3389 use pool my_cluster_pool

Maintaining persistence across virtual servers that use the same virtual addresses

You can set the BIG-IP to maintain persistence across virtual servers that share the same virtual address. When this mode is turned on, the BIG-IP attempts to send all persistent connection requests received from the same client, within the persistence time limit, to the same node, but only when the virtual server hosting the connection has the same virtual address as the virtual server hosting the initial persistent connection. Connection requests from the client that go to other virtual servers with different virtual addresses, or connection requests that do not use persistence, are load balanced according to the load balancing mode defined for the pool.

Suppose a BIG-IP configuration includes the following virtual server mappings, in which the virtual server v1:http references the pool http_pool (containing the nodes n1:http and n2:http), and the virtual servers v1:ssl and v2:ssl reference the pool ssl_pool (containing the nodes n1:ssl and n2:ssl). Each virtual server uses persistence:

b virtual v1:http use pool http_pool

b virtual v1:ssl use pool ssl_pool

b virtual v2:ssl use pool ssl_pool

For example, say that a client makes an initial connection to v1:http, and the load balancing mechanism assigned to the pool http_pool chooses n1:http as the node. If the same client subsequently connects to v1:ssl, the BIG-IP uses the persistence session established with the first connection to determine the node that should receive the connection request, rather than the load balancing mode. The BIG-IP sends this connection request to n1:ssl, which uses the same node address as the n1:http node that currently hosts the client's first connection.

If the same client then connects to v2:ssl, the BIG-IP starts tracking a new persistence session, and it uses the load balancing mode to determine which node should receive the connection request, because the requested virtual server uses a different virtual address (v2) than the virtual server hosting the first persistent connection request (v1). In order for this mode to be effective, virtual servers that use the same virtual address, as well as those that use TCP or SSL persistence, should include the same node addresses in the virtual server mappings.

To activate persistence for virtual servers that use the same address using the Configuration utility

  1. In the navigation pane, click System.
    The Network Map screen opens.
  2. Click the Advanced Properties tab.
    The BIG-IP System Control Variables screen opens.
  3. Click the Allow Persistence Across All Ports for Each Virtual Address check box. (To disable this persistence mode, clear the check box).
  4. Click the Apply button.

To activate persistence for virtual servers that use the same address from the command line

The global variable persist_across_services turns this mode on and off. To activate the persistence mode, type:

b global persist_across_services enable

To deactivate the persistence mode, type:

b global persist_across_services disable

Maintaining persistence across all virtual servers

You can set the BIG-IP to maintain persistence for all connections requested by the same client, regardless of which virtual server hosts each individual connection initiated by the client. When this mode is turned on, the BIG-IP attempts to send all persistent connection requests received from the same client, within the persistence time limit, to the same node. Connection requests from the client that do not use persistence are load balanced according to the currently selected load balancing mode.

The following examples show virtual server mappings, where the virtual servers v1:http and v2:http reference the http1_pool and http2_pool (both pools contain the nodes n1:http and n2:http) and the virtual servers v1:ssl and v2:ssl reference the pools ssl1_pool and ssl2_pool (both pools contain the nodes n1:ssl and n2:ssl). Each virtual server uses persistence:

b virtual v1:http use pool http1_pool

b virtual v1:ssl use pool ssl1_pool

b virtual v2:http use pool http2_pool

b virtual v2:ssl use pool ssl2_pool

Say that a client makes an initial connection to v1:http and the BIG-IP load balancing mechanism chooses n1:http as the node. If the same client subsequently connects to v1:ssl, the BIG-IP sends the client's request to n1:ssl, which uses the same node address as the n1:http node that currently hosts the client's initial connection. What makes this mode different from maintaining persistence across virtual servers that use the same virtual address is that the same behavior applies even across different virtual addresses: if the same client subsequently connects to v2:ssl, the BIG-IP still sends the client's request to n1:ssl.

Warning: In order for this mode to be effective, virtual servers that use pools with TCP or SSL persistence should include the same member addresses in the virtual server mappings.
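The scope of the two global persistence modes described above can be sketched in Python. This is a hypothetical model, not BIG-IP code: it only shows how persist_across_services and persist_across_virtuals progressively widen the key under which an existing persistence record is matched.

```python
def persistence_key(client_ip, virtual_server,
                    across_services=False, across_virtuals=False):
    """Hypothetical model of the key a persistence record is matched by.

    Default: each virtual server (address:port) tracks its own sessions.
    across_services: sessions are shared across all ports of one virtual
    address. across_virtuals: sessions are shared across all virtual
    servers, regardless of address.
    """
    addr, _, port = virtual_server.partition(":")
    if across_virtuals:
        return (client_ip,)
    if across_services:
        return (client_ip, addr)
    return (client_ip, addr, port)

# With persist_across_services, v1:http and v1:ssl share a session,
# but v2:ssl (a different virtual address) starts a new one.
```

Under persist_across_virtuals, even v2:ssl maps to the same key as v1:http, which is why the request still lands on the node sharing the original node address.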

To activate persistence across all virtual servers using the Configuration utility

  1. In the navigation pane, click the System icon.
    The Network Map screen opens.
  2. Click the Advanced Properties tab.
    The BIG-IP System Control Variables screen opens.
  3. Click the Allow Persistence Across All Virtual Servers check box to activate this persistence mode.
  4. Click the Apply button.

To activate persistence across all virtual servers from the command line

The global variable persist_across_virtuals turns this mode on and off. To activate the persistence mode, type:

b global persist_across_virtuals enable

To deactivate the persistence mode, type:

b global persist_across_virtuals disable

HTTP redirection

Another attribute of a pool is HTTP redirection. HTTP redirection allows you to configure a pool so that HTTP traffic is redirected to another protocol identifier, host name, port number, or URI path. For example, if all members of a pool are unavailable (that is, the members are disabled, marked down, or have exceeded their connection limit), the HTTP request can be redirected to a fallback host, with the HTTP reply status code 302 Found.

When configuring a pool to redirect HTTP traffic to a fallback host, you can use an IP address or a fully-qualified domain name (FQDN), or you can use a special format string included in the BIG-IP. These format strings can also be used for specifying protocol identifiers, ports, and URIs.

The following two sections describe these two ways of redirecting HTTP requests. Following these two sections is a description of a related feature, which allows you to configure a server to rewrite the specified HTTP redirection.

Using IP addresses and Fully Qualified Domain Names

When redirecting traffic to a fallback host, you can specify the fallback host as an IP address or as a fully qualified domain name (FQDN). In either case, it may include a port number. The example in Figure 4.7 redirects the request to http://redirector.sam.com.

Figure 4.7 Fallback host in a pool

 pool my_pool {
member 10.12.10.1:80
member 10.12.10.2:80
member 10.12.10.3:80
fallback redirector.sam.com

}

Note: The HTTP redirect mechanism is not a load balancing method. The redirect URL may be a virtual server pointing to the requested HTTP content, but this is not implicit in its use.

Table 4.6 shows how different fallback host specifications are resolved.

Table 4.6 How the fallback host specifications are resolved

Requested URL                                Fallback Host Specification   Redirect URL
http://www.sam.com/                          fallback.sam.com              http://fallback.sam.com/
http://www.sam.com/                          fallback.sam.com:8002         http://fallback.sam.com:8002/
http://www.sam.com:8001                      fallback.sam.com              http://fallback.sam.com/
http://www.sam.com:8001/                     fallback.sam.com:8002         http://fallback.sam.com:8002/
http://www.sam.com/sales                     fallback.sam.com              http://fallback.sam.com/sales
http://192.168.101.3/                        fallback.sam.com              http://fallback.sam.com/
http://192.168.101.3/sales                   fallback.sam.com              http://fallback.sam.com/sales
http://www.sam.com/sales                     192.168.101.5                 http://192.168.101.5/sales
http://192.168.101.3/sales/default.asp?q=6   fallback.sam.com              http://fallback.sam.com/sales/default.asp?q=6
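The resolution pattern in Table 4.6 can be modeled with a short Python sketch. This is a hypothetical illustration, not BIG-IP code: the fallback specification replaces the host (and port, if one is given) of the requested URL, while the path and query string carry over.

```python
from urllib.parse import urlsplit, urlunsplit

def resolve_fallback(requested_url, fallback_spec):
    """Model of Table 4.6: swap in the fallback host, keep the path/query.

    A port in the fallback specification is used as-is; a port in the
    requested URL is dropped, because the whole host:port portion is
    replaced by the fallback specification.
    """
    parts = urlsplit(requested_url)
    return urlunsplit((parts.scheme, fallback_spec, parts.path, parts.query, ""))

# resolve_fallback("http://www.sam.com/sales", "fallback.sam.com")
#   -> "http://fallback.sam.com/sales"
```

Note that the query string survives the redirect, matching the last row of the table.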

Using format strings (expansion characters)

To allow HTTP redirection to be fully configurable with respect to target URI, the following format strings are available. These strings can be used within both pools and rules. (For more information on using HTTP redirection format strings within rules, see Pool selection based on HTTP header data, on page 4-56.)

Table 4.7 lists and defines the format strings that you can use to specify HTTP redirection.

Table 4.7 Format strings for HTTP redirection

Format String   Description
%h              Host name, as obtained from the Host: header of the client
%p              Port, from the virtual server listening port
%u              URI path, as obtained from a GET/POST request

An example of a fallback host string is https://%h/sample.html. In this string, specifying https as the protocol identifier causes the traffic to be redirected to that protocol instead of the standard http protocol. Also, the string sample.html causes the traffic to be redirected to that URI instead of to the standard URI specified in the HTTP header, which would normally be represented in the fallback string as %u.

Table 4.8 shows some sample redirection specifications, their explanations, and their resulting redirections. The examples assume an original request of http://www.example.com:8080/sample.

Table 4.8 Sample HTTP redirections using format strings

Redirection string                        Explanation                                                    Resulting redirection
%h:%p/%u                                  No redirection (preserve host name, port, and path)            http://www.example.com:8080/sample
%h/unavailable                            Change path, remove port                                       http://www.example.com/unavailable
https://%h/unavailable                    Specify https as protocol, remove port, change path            https://www.example.com/unavailable
www.sample.com:8080/%u                    Change host name and port, preserve path                       http://www.sample.com:8080/sample
https://1.2.3.4:443/%u/unavailable.html   Specify https as protocol; change host name, port, and path    https://1.2.3.4:443/sample/unavailable.html
ftp://1.2.3.4:%p/unavailable/%u           Specify ftp as protocol; change host name and path             ftp://1.2.3.4:8080/unavailable/sample
rtsp://%h:554/streamingmedia/%u           Specify rtsp as protocol; change port and path                 rtsp://www.example.com:554/streamingmedia/sample
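The expansions in Table 4.8 amount to simple string substitution. The following Python sketch is a hypothetical model (not BIG-IP code), assuming an original request of http://www.example.com:8080/sample as in the table: it expands %h, %p, and %u, and defaults the protocol to http when the redirection string names none.

```python
def expand_redirect(template, host, port, uri_path):
    """Expand the BIG-IP redirect format strings %h, %p, and %u.

    host comes from the client's Host: header, port from the virtual
    server's listening port, and uri_path from the GET/POST request.
    If the expanded result carries no protocol identifier, http:// is
    assumed.
    """
    target = (template.replace("%h", host)
                      .replace("%p", str(port))
                      .replace("%u", uri_path))
    if "://" not in target:
        target = "http://" + target
    return target

# expand_redirect("https://%h/unavailable", "www.example.com", 8080, "sample")
#   -> "https://www.example.com/unavailable"
```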

The example in Figure 4.8 shows a pool configured to redirect an HTTP request to a different protocol (https), host name (1.2.3.4), port number (443), and path (unavailable.html).

Figure 4.8 HTTP redirection specified in a pool

  pool my_pool {
member 10.12.10.1:80
member 10.12.10.2:80
member 10.12.10.3:80
fallback https://1.2.3.4:443/%u/unavailable.html

}

Rewriting HTTP redirection

Sometimes, a client request is redirected from the HTTPS protocol to the HTTP protocol, which is a non-secure channel. If you want to ensure that the request remains on a secure channel, you can cause that redirection to be rewritten so that it is redirected back to the HTTPS protocol. Also, through the rewriting of redirections, you can rewrite a port number or a URI path.

You can rewrite HTTP redirections in either of two ways:

  • You can create an SSL Accelerator proxy and configure the rewriting of HTTP redirections as a proxy option. For more information, see Rewriting HTTP redirection, on page 4-105.
  • If your web server is an IIS server, you can configure that server, instead of your SSL proxy, to rewrite any HTTP redirections. Part of this IIS server configuration includes the installation of a special BIG-IP filter, redirectfilter.dll, on the IIS server. The following section provides this IIS configuration procedure.

To install the filter for rewriting HTTP redirection

To install the ISAPI filter (redirectfilter.dll) for use with a Microsoft Internet Information Server (IIS) version 4.0 or 5.0, follow these steps:

  1. Copy the filter DLL to an appropriate folder, such as the SCRIPTS or CGI-BIN subdirectory.
  2. Open the Internet Service Manager (MMC).
  3. Select the appropriate level for the ISAPI filter:
    • If you intend to use the ISAPI filter with all Web sites, select the ServerName icon.
    • If you intend to use the ISAPI filter with a specific Web site, select the icon for that Web site (for example, the default Web site).
  4. Right-click the level (icon) that you selected.
  5. Click the ISAPI Filters tab.

    Note: To configure an ISAPI filter for all Web sites, first click the Edit button that is next to the Master Properties of the WWW Service.

  6. Click Add.
  7. Type a name for the ISAPI filter.
  8. Click Browse and select the ISAPI filter that you copied in step 1.
  9. Click OK.
  10. Stop the IISADMIN service. To do this, either type net stop iisadmin /y at a command prompt, or use the Services applet that is located in Control Panel (in Windows NT 4.0) or Administrative Tools (in Windows 2000).
  11. Start the World Wide Web Publishing Service by typing net start w3svc at a command prompt, or by using the Services applet that is located in Control Panel (in Windows NT 4.0) or Administrative Tools (in Windows 2000).
  12. Repeat the previous step for any other services that were stopped in step 10.
  13. Browse back to the ISAPI Filters tab (by following steps 2 through 5) and verify that the filter is loaded properly. You should see a green arrow pointing up under the Status column.

Note: The ISAPI Filters tab specifies a load order, with the filter at the top of the list loading first. Normally Sspifilt.dll, the ISAPI filter for SSL, is at the top of the list, to allow any other filters to access data before IIS encrypts and transmits, or decrypts and receives, HTTPS traffic.

HTTP header insertion

An optional attribute of a pool is HTTP header insertion. Using this attribute, you can configure a pool to insert a header into an HTTP request. The HTTP header being inserted can include a client IP address. Including a client IP address in an HTTP header is useful when a connection goes through a secure network address translation (SNAT) and you need to preserve the original client IP address.

The header insertion must be specified in the pool definition as a quoted string. Figure 4.9 shows the required syntax.

 pool <pool_name> {
header insert <quoted string>
}

Figure 4.9 Syntax of a header insertion string within a pool

Optionally, you can include rule variables in the quoted string. For example, the pool definition shown in Figure 4.10 uses the rule variable client_addr to represent the original client IP address of an HTTP request.

 pool my_pool {
header insert "OrigClientAddr:${client_addr}"
member 10.0.0.1:80
member 10.0.0.2:80
member 10.0.0.3:80
}

Figure 4.10 Example of a rule variable within a pool for header insertion

The rule variables that can be used for header insertion are:

  • client_addr
  • client_port
  • server_addr
  • server_port
  • link_qos
  • ip_tos

    Figure 4.11 shows a pool that inserts a header, using all of the above rule variables.

 pool my_pool {
header insert "ClientSide:${client_addr}:${client_port} ->
${server_addr}:${server_port} tos=${ip_tos} qos=${link_qos}"
member 10.0.0.1:80
member 10.0.0.2:80
member 10.0.0.3:80
}

Figure 4.11 Pool with header insertion string using multiple rule variables

The above header insertion string inserts a header such as that shown in Figure 4.12 into an HTTP request:

 GET /index.html HTTP/1.0
ClientSide: 10.0.0.1:3340 -> 10.0.0.101:80 tos=16 qos=0
Host: www.yahoo.com
Connection: Keep-Alive

Figure 4.12 Header resulting from a header insertion string within a pool

Note: If the rule variable specified is not a valid variable, the invalid variable name is inserted directly into the HTTP request, with no substitution.
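The substitution behavior, including the rule that an invalid variable name is inserted into the request unchanged, can be sketched in Python. This is a hypothetical model, not BIG-IP code; the variable set matches the one used in Figure 4.11.

```python
import re

# Rule variables that BIG-IP substitutes into a header insertion string.
KNOWN_VARS = {"client_addr", "client_port", "server_addr",
              "server_port", "link_qos", "ip_tos"}

def insert_header(template, values):
    """Expand ${...} rule variables in a header insertion string.

    Mirrors the documented behavior: an unrecognized variable name is
    left in the request verbatim, with no substitution.
    """
    def sub(match):
        name = match.group(1)
        if name in KNOWN_VARS and name in values:
            return str(values[name])
        return match.group(0)  # invalid variable: inserted as-is
    return re.sub(r"\$\{(\w+)\}", sub, template)

# insert_header("OrigClientAddr:${client_addr}", {"client_addr": "10.0.0.1"})
#   -> "OrigClientAddr:10.0.0.1"
```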

In addition to inserting a client IP address into an HTTP request, you can configure an SSL Accelerator proxy to insert other types of headers into HTTP requests. Examples of headers that an SSL proxy can insert are: information on client certificates, cipher specifications, and client session IDs.

For more information on rule variables and on configuring an SSL proxy to insert headers into HTTP requests, see Rule-based pool selection, on page 4-50 and Inserting headers into HTTP requests, on page 4-95.

Quality of Service (QoS) level

Another attribute of a pool is the Quality of Service (QoS) level. The QoS level is a means by which network equipment can identify and treat traffic differently based on an identifier. As traffic enters the site, the BIG-IP can set the QoS level on a packet, based on the QoS level defined in the pool to which the packet is sent. The BIG-IP can also apply a rule that sends the traffic to different pools of servers based on the Quality of Service level.

The BIG-IP can tag outbound traffic (the return packets based on an HTTP GET) based on the QoS value set in the pool. That value is then inspected by upstream devices and given appropriate priority. Based on a rule, the BIG-IP can examine incoming traffic to see if it has a particular QoS or ToS tag in the header. The BIG-IP can then make a rule-based load balancing decision based on that tag.

Figure 4.13 shows how to configure a pool so that a QoS level is set for a packet sent to that pool. In this example, the QoS tag, represented by the link_qos variable, is set to 3 when sending packets to the client, and set to 4 when packets are sent to the server.

 pool http_pool {
link_qos to client 3
link_qos to server 4
}

Figure 4.13 Example of a pool that sets the QoS level on a packet

In addition to configuring a pool to set the QoS level on a packet, you can configure a rule that selects a pool based on the existing QoS value within the packet. For more information, see Quality of Service (QoS) level, on page 4-53.

Type of Service (ToS) level

Another attribute of a pool is the Type of Service (ToS) level. The ToS level, also known as the DiffServ value, is a means by which network equipment can identify and treat traffic differently based on an identifier. As traffic enters the site, the BIG-IP can set the ToS level on a packet, based on the ToS level defined in the pool to which the packet is sent. The BIG-IP can also apply a rule and send the traffic to different pools of servers based on the ToS level.

The BIG-IP can tag outbound traffic (the return packets based on an HTTP GET) based on the ToS value set in the pool. That value is then inspected by upstream devices and given appropriate priority. Based on a rule, the BIG-IP can examine incoming traffic to see if it has a particular ToS tag in the header. The BIG-IP can then make a rule-based load balancing decision based on that tag.

Figure 4.14 shows how to configure a pool so that a ToS level is set for a packet sent to that pool. In this example, the ToS tag, represented by the ip_tos variable, is set to 16 when sending packets to the client, and set to 16 when packets are sent to the server.

 pool http_pool {
ip_tos to client 16
ip_tos to server 16
}

Figure 4.14 Example of a pool that sets the ToS level on a packet

In addition to configuring a pool that sets the ToS level on a packet, you can configure a rule that selects a pool based on the existing ToS value within the packet. For more information, see IP Type-Of-Service (ToS) level, on page 4-53.

Disabling SNAT and NAT connections

When configuring a pool, you can specifically disable any secure network address translations (SNATs) or network address translations (NATs) for any connections that use that pool. By default, this setting is enabled.

For general information on SNATs and NATs, see Address translation: SNATs, NATs, and IP forwarding, on page 4-121.

The example in Figure 4.15 shows the syntax for disabling SNAT and NAT translation for any connections that use the pool my_pool.

 pool my_pool {     
snat disable
nat disable
member 10.1.1.1:80
member 10.1.1.2:80
}

Figure 4.15 Disabling SNAT and NAT translations

To disable a SNAT or NAT connection for a pool using the Configuration utility

  1. In the navigation pane, click Pools.
  2. Click the Add button.
  3. Clear the Enable SNATs check box. (By default, this box is checked.)
  4. Click Done.

To disable a SNAT or NAT connection for a pool from the command line

b pool <pool_name> modify { snat disable }

One case in which you might want to configure a pool to disable SNAT or NAT connections is when you want the pool to disable SNAT or NAT connections for a specific service. In this case, you could create a separate pool to handle all connections for that service, and then configure the snat disable or nat disable attribute on that pool. The following section describes this procedure.

To disable SNAT connections that use a specific service

The following procedure creates an automapped SNAT for a VLAN, creates a pool that disables SNAT or NAT connections, and then directs a wildcard virtual server using port 162 to send connections to the newly-defined pool.

  1. Enable SNAT automapping on the self IP address for VLAN my_vlan. For example:

    b self 192.168.33.14 vlan my_vlan snat automap enable

  2. Create an automapped SNAT for the VLAN my_vlan. For example:

    b snat map my_vlan to auto

  3. Create a forwarding pool with the snat disable attribute defined. For example:

    b pool snat_disable_pool { snat disable forward }

    Note: For information on forwarding pools, see Forwarding pools, on page 4-47.

  4. Create a wildcard virtual server for the VLAN my_vlan, specifying port 162 and the pool snat_disable_pool. For example:

    b virtual my_vlan:162 use pool snat_disable_pool

    Figure 4.16 shows the resulting entries in the /config/bigip.conf file.

     # self IP addresses    
    self 192.168.33.14 {
    vlan my_vlan
    netmask 255.255.255.0
    broadcast 192.168.33.255
    snat automap enable

    }

    # server pools
    pool snat_disable_pool {
    snat disable
    forward

    }

    # virtual servers
    virtual my_vlan:162 unit 1 {
    use pool snat_disable_pool
    translate addr disable

    }
    Figure 4.16 Sample entries in the /config/bigip.conf file

    Figure 4.17 shows an example of a rule that sends SNAT connections to a pool that disables SNAT connections on a range of ports, defined in the class IP_Port_Range.

     # The snat_disable pool disables all SNAT connections.    
    if (client_port == one of IP_Port_Range) {
    use ( snat_disable)
    }
    else {
    use ( other_pool)
    }

    # The IP_Port_Range class contains a list of two ports/services.

    class IP_Port_Range {
    161
    162
    }

    Figure 4.17 A rule that disables SNAT connections for a range of ports

Forwarding pools

A forwarding pool is a pool that specifies that a connection should be forwarded, using IP routing, instead of load balanced. In many cases, this eliminates the need to create a forwarding virtual server.

Forwarding pools are typically used with wildcard virtual servers or network virtual servers only. When you enable forwarding on a pool, you can apply any feature that can be configured on a pool to a forwarding connection.

A pool configured for forwarding has no members. Also, this type of pool cannot be the default gateway pool.

Figure 4.18 shows an example of a pool configured for forwarding.

 pool my_pool {     
link_qos to client 5
link_qos to server 5
forward
}

Figure 4.18 Example of a pool configured for forwarding

To configure a pool for forwarding using the Configuration utility

  1. In the navigation pane, click Pools.
  2. Click the Add button.
  3. Click the forwarding button. If you enable forwarding, you cannot enter a list of pool members.
  4. Click Done.

To configure a pool for forwarding from the command line

b pool <pool_name> { forward }

Note: If you want to enable IP forwarding for a virtual server or globally for the BIG-IP, see Forwarding virtual servers, on page 4-76 and IP forwarding, on page 4-134, respectively.

Rules

As described in the Pools section, a pool may be referenced directly by the virtual server, or indirectly through a rule, which chooses among two or more load balancing pools. In other words, a rule selects a pool for a virtual server. A rule is referenced by a 1- to 31-character name. When a packet arrives that is destined for a virtual server that does not match a current connection, the BIG-IP can select a pool by evaluating a virtual server rule. The rule is configured to ask true or false questions such as:

  • HTTP header load-balancing: Does the packet data contain an HTTP request with a URI ending in cgi?
  • IP header load balancing: Does the source address of the packet begin with the octet 206?

    In addition to creating a rule to select a pool, you can also create a rule to redirect an HTTP request to a specific host name, port number, or URI path.

    The remainder of this section covers the following topics:

  • Rule-based pool selection - Describes how to use the pool-selection criteria to configure load balancing.
  • Rule-based HTTP redirection - Describes how to create a rule to redirect an HTTP request to a specific host name, port, or URI.
  • Rule Statements - Describes the various types of expressions and operands that you are allowed to use when constructing rules.
  • Configuring rules - Provides the procedure for configuring a rule, using either the Configuration utility or the command-line interface.
  • Additional rule examples - Provides additional examples showing how to configure rules for various types of pool selection.

    Note: Once you have created a rule, you need to configure a virtual server to reference that rule. For information on configuring a virtual server to reference a rule, see Configuring virtual servers that reference rules, on page 4-82.

Rule-based pool selection

Table 4.9 lists the various criteria you can use when creating a rule to select a pool.

The attributes you can configure for a rule

Pool-selection criteria

Description

Pool selection based on HTTP request data

You can send connections to a pool or pools based on HTTP header information you specify.

Pool selection based on IP packet header data

You can send connections to a pool or pools based on IP addresses, port numbers, IP protocol numbers, Quality of Service (Qos), and Type of Service (ToS) levels defined within a packet.

Pool selection based on one of operator

You can send connections to a pool or pools based on whether the destination address is a member of a specific named class, such as one of AOL.

Pool selection based on HTTP header data (Cache rule)

This type of rule is any rule that contains a cache statement. A cache rule selects a pool based on HTTP header data. You cannot use it with FTP.

The following sections describe specific rule statements that you can use to select pools for load balancing.

Note: You must define a pool before you can define a rule that references the pool.

Pool selection based on HTTP request data

A rule specifies what action the BIG-IP takes depending on whether a question is answered true or false. A rule may either select a pool or ask another question. For example, you may want a rule that logically states: "If the packet data contains an HTTP request with a URI ending in cgi, then load balance using the pool cgi_pool. Otherwise, load balance using the pool default_pool".

Figure 4.19 shows a rule with an HTTP request variable that illustrates this example.

Figure 4.19 A rule based on an HTTP header variable

 rule cgi_rule {    
if (http_uri ends_with "cgi") {
use ( cgi_pool )
}
else {
use ( default_pool )
}
}

Rules normally run right after the BIG-IP receives a packet that does not match a current connection. However, in the case of an HTTP request, the first packet is a TCP SYN packet that does not contain the HTTP request. In this case, the BIG-IP proxies the TCP handshake with the client and begins evaluating the rule again when the packet containing the HTTP request is received. When a pool has been selected and a server node selected, the BIG-IP proxies the TCP handshake with the server node and then passes traffic normally.

For examples of rules that select pools based on header information inserted into HTTP requests by an SSL Accelerator proxy, see Inserting headers into HTTP requests, on page 4-95.

Pool selection based on IP packet header data

In addition to specifying the HTTP variables within a rule, you can also select a pool by specifying IP packet header information within a rule. The types of information you can specify in a rule are:

  • Client IP address
  • Server IP address
  • Client port number
  • Server port number
  • IP protocol number
  • QoS level
  • ToS level

To specify IP packet header variables within a rule using the Configuration utility

  1. In the navigation pane, click Rules.
  2. Click the Add button.
  3. In the Name box, enter a unique name for the rule.
  4. In the Type box, click the button for Rule Builder.
  5. Click Next.
  6. Select a variable on the left side of the screen.
  7. Fill in all information within the row.
  8. Click Next.
  9. Select Discard.
  10. Click Next.
  11. Select No Action or Discard from the box.
  12. Click Done.

The following sections describe the specific types of IP packet header data that you can specify within a rule.

IP addresses

You can specify the client_addr or the server_addr variable within a rule to select a pool. For example, if you want to load balance based on part of the client's IP address, you might want a rule that states:

"All client requests with the first byte of their source address equal to 206 will load balance using a pool named clients_from_206 pool. All other requests will load balance using a pool named other_clients_pool."

Figure 4.20 shows a rule that implements the preceding statement.

 rule clients_from_206_rule {
if ( client_addr equals 206.0.0.0 netmask 255.0.0.0 ) {
use ( clients_from_206 )
}
else {
use ( other_clients_pool )
}
}

Figure 4.20 A rule based on the client IP address variable
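The netmask comparison in Figure 4.20 is equivalent to a membership test against the 206.0.0.0/8 network, that is, a first source-address octet of 206. The selection logic can be sketched in Python (a hypothetical model, not BIG-IP code):

```python
import ipaddress

def select_pool(client_addr):
    """Model of Figure 4.20: 'client_addr equals 206.0.0.0 netmask 255.0.0.0'.

    The address-plus-netmask comparison matches any client whose source
    address falls inside the 206.0.0.0/255.0.0.0 network.
    """
    network = ipaddress.ip_network("206.0.0.0/255.0.0.0")
    if ipaddress.ip_address(client_addr) in network:
        return "clients_from_206"
    return "other_clients_pool"

# select_pool("206.1.2.3") -> "clients_from_206"
```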

For additional examples of rules using IP packet header information, see Additional rule examples, on page 4-65.

Port numbers

BIG-IP includes rule variables that you can use to select a pool based on the port number of the client or server. These variables are client_port and server_port.

To configure a rule to select a pool based on a port number, use the syntax shown in the example in Figure 4.21.

 rule my_rule {
if (client_port > 1000) {
use (slow_pool)
}
else {
use (fast_pool)
}
}

Figure 4.21 A rule based on a TCP or UDP port number

IP protocol numbers

BIG-IP includes a rule variable, ip_protocol, that you can use to select a pool based on an IP protocol number.

To configure a rule to select a pool based on an IP protocol number, use the syntax shown in the example in Figure 4.22.

 rule my_rule {
if (ip_protocol == 6) {
use (tcp_pool)
}
else {
use (slow_pool)
}
}

Figure 4.22 A rule based on an IP protocol number

Quality of Service (QoS) level

The Quality of Service (QoS) standard is a means by which network equipment can identify and treat traffic differently based on an identifier. As traffic enters the site, the BIG-IP can apply a rule that sends the traffic to different pools of servers based on the QoS level within a packet.

To configure a rule to select a pool based on the QoS level of a packet, you can use the link_qos rule variable, as shown in the example in Figure 4.23.

 rule my_rule {
if (link_qos > 2) {
use (fast_pool)
} else {
use (slow_pool)
}
}

Figure 4.23 A rule based on a Quality of Service (QoS) level

For information on setting QoS values on packets based on the pool selected for that packet, see Quality of Service (QoS) level, on page 4-44.

IP Type-Of-Service (ToS) level

The Type of Service (ToS) standard is a means by which network equipment can identify and treat traffic differently based on an identifier. As traffic enters the site, the BIG-IP can apply a rule that sends the traffic to different pools of servers based on the ToS level within a packet.

The variable that you use to set the ToS level on a packet is ip_tos. This variable is sometimes referred to as the DiffServ variable.

To configure a rule to select a pool based on the ToS level of a packet, you can use the ip_tos rule variable, as shown in the example in Figure 4.24.

 rule my_rule {
if (ip_tos == 16) {
use (telnet_pool)
}
else {
use (slow_pool)
}
}

Figure 4.24 A rule based on a Type of Service (ToS) level

For information on setting ToS values on packets based on the pool selected for that packet, see Type of Service (ToS) level, on page 4-44.
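The value 16 tested in Figure 4.24 is the minimize-delay flag of the ToS octet as defined in RFC 1349, which Telnet clients traditionally set; that is presumably why the example sends matching traffic to telnet_pool. A Python sketch of the comparison (pool names are illustrative only):

```python
# The ToS octet holds three precedence bits followed by four ToS
# flag bits (RFC 1349): minimize-delay = 16, maximize-throughput = 8,
# maximize-reliability = 4, minimize-cost = 2.
MINIMIZE_DELAY = 0x10  # decimal 16, the value tested in Figure 4.24

def pick_pool(ip_tos):
    # Pool names follow the example rule; they are illustrative only.
    return "telnet_pool" if ip_tos == MINIMIZE_DELAY else "slow_pool"
```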

Pool selection based on one of operator

BIG-IP includes a rule operator that you can use to select a pool based on whether the variable being used in the rule represents a member of a specific class.

Example

For example, prior to the availability of the one of operator, a rule that was intended to send incoming AOL connections to the pool aol_pool was written as shown in Figure 4.25, where multiple values for the client_addr variable had to be individually specified.

 rule my_rule {
if ( client_addr equals 152.163.128.0 netmask 255.255.128.0

or client_addr equals 195.93.0.0 netmask 255.255.254.0
or client_addr equals 205.188.128.0 netmask
255.255.128.0 ) {
use (aol_pool)
}
else {
use (all_pool)
}
}

Figure 4.25 Example of a rule without the one of operator

Using the one of operator instead, you can cause BIG-IP to load balance all incoming AOL connections to the pool aol_pool, if the value of the client_addr variable is a member of the class AOL. Figure 4.26 shows this type of rule. In this case, the one of operator indicates that the variable client_addr is actually a list of values (that is, a class).

 rule my_rule {
if (client_addr equals one of aol) {
use (aol_pool)
}
else {
use (all_pool)
}
}

Figure 4.26 A rule based on the one of operator

Note that an expression such as client_addr equals one of aol is true if the expression is true with at least one specific value in the class.

For another example of a rule using the one of operator, see Additional rule examples, on page 4-65.
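The semantics of one of can be sketched in Python (an illustration only, not BIG-IP code): the class is a list of values, and the expression is true if the comparison holds for any member. The networks below are the AOL ranges from Figure 4.25.

```python
import ipaddress

# The class, modeled as a list of networks (the AOL ranges above).
aol = [
    ipaddress.ip_network("152.163.128.0/255.255.128.0"),
    ipaddress.ip_network("195.93.0.0/255.255.254.0"),
    ipaddress.ip_network("205.188.128.0/255.255.128.0"),
]

def equals_one_of(addr, address_class):
    # True if the address matches ANY member of the class.
    return any(ipaddress.ip_address(addr) in net for net in address_class)

def pick_pool(client_addr):
    return "aol_pool" if equals_one_of(client_addr, aol) else "all_pool"
```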

Class types

The one of operator for rules supports three specific types of classes. They are:

  • Strings - The following command creates a string class:

    b class images { \".gif\" }

    Note: This example shows the use of escape characters for the quotation marks.

    Figure 4.27 shows the resulting string type of class.

     class images {
    ".gif"
    }

    Figure 4.27 An example of a string type of class

  • Numerics - The following command creates a numeric type of class:

    b class my_protos { 27 38 93 }

    Figure 4.28 shows the resulting numeric type of class:

     class my_protos {
    27
    38
    93
    }

    Figure 4.28 An example of a numeric type of class

  • IP addresses - The following command creates a class containing IP addresses:

    b class my_ntwk { network 10.2.2.0 mask 255.255.255.0 }

    Figure 4.29 shows the resulting IP address type of class:

     class my_ntwk {
    network 10.2.2.0 mask 255.255.255.0
    }

    Figure 4.29 An example of an IP address type of class

    Note: The size of a class is limited by system resources only.

Predefined lists

In addition to the one of operator, the BIG-IP includes a number of predefined lists for you to use with this operator. They are:

  • AOL Network
  • Image Extensions
  • Non-routable addresses

    These lists are located in the file /etc/default_classes.txt. When the bigpipe load command is issued, the lists are loaded. Unless modified by a user, these lists are not saved to the file bigip.conf.

To view classes

To view classes, including the default classes, use the following command.

bigpipe class show

Pool selection based on HTTP header data

A rule can contain a cache statement that selects a pool based on HTTP header data. A cache statement returns either the origin pool, the hot pool, or the cache pool. When the cache pool is selected, it is accompanied by the indicated node address and port. When a rule returns both a pool and a node, the BIG-IP does not do any additional load balancing or persistence processing.

Figure 4.30 shows an example of a rule containing a cache statement.

Figure 4.30 An example of a cache load balancing rule

 rule my_rule {    
if ( http_host starts_with "dogfood" ) {
cache ( http_uri ends_with "html" or http_uri ends_with "gif" ) {
origin_pool origin_server
cache_pool cache_servers
hot_pool cache_servers
hot_threshold 100
cool_threshold 10
hit_period 60
content_hash_size 1024
}
}
else {
use ( named_servers )
}
}

For a complete list of cache statement syntax, see Configuring a remote origin server, on page 4-64.

Rule-based HTTP redirection

In addition to configuring a rule to select a specific pool, you can also configure a rule to redirect an HTTP request to a specific location, using the redirect to operator and a set of format strings included in the BIG-IP. The location can be a host name, a port number, or a URI. The format strings are %h, %p, and %u. Used within a redirection string, these format strings act as placeholders for the parts of the original request (host name, port number, and URI path) that carry over into the redirection unchanged.

For example, the string https://%h:443/%u specifies that the HTTP request is to be redirected to a different protocol (https instead of the standard http) and a different port number (443). The host name and the URI path remain the same, indicated by the %h and %u format strings.

Figure 4.31 shows a rule that is configured to redirect an HTTP request.

 rule my_rule {
if (http_uri ends_with "baz") {
redirect to "https://%h:8080/%u/"
}
else {
use (web_pool)
}
}

Figure 4.31 A rule based on HTTP redirection

The preceding rule applies the format string to the URL if the URI ends with the string baz. In this case, the format string sets the protocol to https, replaces the requested port number (if any) with 8080, and appends a trailing slash (/) to the end of the URI.

Note: The %u format string strips the first character of the URI path. This is usually a slash (/), and this modification is done purely for aesthetic reasons. Thus when describing a URL, the string http://%h/%u is used instead of http://%h%u.

For more information on HTTP redirection and format strings, see HTTP redirection, on page 4-37.
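The expansion of a redirection string can be sketched as follows (a Python illustration of the documented behavior, not BIG-IP source; the function name is invented):

```python
def expand_redirect(template, host, port, uri):
    # %h is the requested host, %p the requested port, and %u the
    # URI path with its leading character (usually a slash) stripped.
    u = uri[1:] if uri.startswith("/") else uri
    return (template.replace("%h", host)
                    .replace("%p", str(port))
                    .replace("%u", u))
```

Applying the template from Figure 4.31 to a request for /shop/baz on host www.site.com yields https://www.site.com:8080/shop/baz/.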

Rule statements

A rule consists of statements. Rules support the following types of statements:

  • An if statement asks a true or false question and, depending on the answer, takes some action.
  • A discard statement discards the request. This statement must be conditionally associated with an if statement.
  • A use statement uses a selected pool for load balancing. This statement must be conditionally associated with an if statement.
  • A cache statement selects the origin pool, cache pool, or hot pool based on HTTP header data. This statement can be conditionally associated with an if statement.
  • A redirect statement sends traffic to a specific destination, rather than to a pool for load balancing.

The primary possible statements expressed in command line syntax are:

if (<question>) {<statement>} [else {<statement>}]

discard

use ( <pool_name> )

cache ( <expressions> )

redirect ( <redirect URL> )

For detailed syntax of these rules statements, see To define a rule from the command line, on page 4-63.

Questions (expressions)

A question or expression is asked by an if statement and has a true or false answer. A question or expression has two parts: a predicate (operator), and one or two subjects (operands).

There are two types of subjects (operands); some subjects change and some subjects stay the same.

  • Changing subjects are called variable operands.
  • Subjects that stay the same are called constant operands.

    A question, or expression, asks questions about variable operands by comparing their current value to constant operands with relational operators.

Constant operands

Possible constant operands are:

  • IP protocol constants, for example:
    UDP or TCP
  • IP addresses expressed in masked dot notation, for example:

    206.0.0.0 netmask 255.0.0.0

  • Strings of ASCII characters, for example:

    pictures/bigip.gif

  • Regular expression strings

Variable operands (variables)

Since variable operands change their value, they need to be referred to by a constant descriptive name. The variables available depend on the context in which the rule containing them is evaluated. Possible variable operands are:

  • IP packet header variables, such as:

    • client_addr. Used to represent the source IP address of a client. This variable is replaced with an unmasked IP address.
    • server_addr. Used to represent the destination IP address of a packet. This variable is replaced with an unmasked IP address, and is useful when load balancing traffic to a wildcard virtual server.
    • client_port. Used to represent a client port number.
    • server_port. Used to represent a server port number.
    • ip_protocol. Used to represent an IP protocol. This variable is replaced with a numeric value representing an IP protocol such as TCP, UDP, or IPSEC.
    • link_qos. Used to represent the Quality of Service (QoS) level.
    • ip_tos. Used to represent the Type of Service (ToS) level.
  • HTTP request strings

    All HTTP request string variables are replaced with string literals. HTTP request variables are referred to in command line syntax by a predefined set of names. Internally, an HTTP request variable points to a method for extracting the desired string from the current HTTP request header data. Before an HTTP request variable is used in a relational expression, it is replaced with the extracted string.

    The evaluation of a rule is triggered by the arrival of a packet. Therefore, variables in the rule may refer to features of the triggering packet. In the case of a rule containing questions about an HTTP request, the rule is evaluated in the context of the triggering TCP SYN packet until the first HTTP request question is encountered. After the proxy, the rule continues evaluation in the context of the HTTP request packet, and variables may refer to this packet. Before a variable is compared to the constant in a relational expression, it is replaced with its current value.

    The allowed variable names are:

    • http_method
      The http_method is the action of the HTTP request. Common values are GET or POST.
    • http_uri
      The http_uri is the URL, but does not include the protocol and the fully qualified domain name (FQDN). For example, if the URL is http://www.url.com/buy.asp, then the URI is /buy.asp.
    • http_version
      The http_version is the HTTP protocol version string. Possible values are "HTTP/1.0" or "HTTP/1.1".
    • http_host
      The http_host is the value in the Host: header of the HTTP request. It indicates the actual FQDN that the client requested. Possible values are a FQDN or a host IP address in dot notation.
    • http_cookie <cookie name>
      The HTTP cookie header is the value in the Cookie: header for the specified cookie name. An HTTP cookie header line can contain one or more cookie name value pairs. The http_cookie <cookie name> variable evaluates to the value of the cookie with the name <cookie name>. For example, given a request with the following cookie header line:

      Cookie: green-cookie=4; blue-cookie=horses

      The variable http_cookie blue-cookie evaluates to the string horses. The variable http_cookie green-cookie evaluates to the string 4.
    • http_header <header_tag_string>
      The variable http_header evaluates the string following an HTTP header tag that you specify. For example, specifying http_header "Host" is equivalent to using the http_host variable. In a rule specification, if you wanted to load balance based on the host name andrew, the rule statement might look as follows:

      if ( http_header "Host" starts_with "andrew" ) { use ( andrew_pool ) } else { use ( main_pool ) }

Operators

In a rule, relational operators compare two operands to form relational expressions. Possible relational operators and expressions are described in Table 4.10.

The relational operators

Expression

Relational Operator

Are two IP addresses equal?

<address> equals <address>

Do a string and a regular expression match?

<variable_operand> matches_regex <regular_expression>

Are two strings identical?

<string> equals <string>

Is the second string a suffix of the first string?

<variable_operand> ends_with <string>

Is the second string a prefix of the first string?

<variable_operand> starts_with <string>

Does the first string contain the second string?

<variable_operand> contains <literal_string>

Is the first operand a member of the class named by the second operand?

<variable_operand> equals one of <class>

In a rule, logical operators modify an expression or connect two expressions together to form a logical expression. Possible logical operators and expressions are described in Table 4.11.

The logical operators

Expression

Logical Operator

Is the expression not true?

not <expression>

Are both expressions true?

<expression> and <expression>

Is either expression true?

<expression> or <expression>
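The string-matching relational operators in Table 4.10 have rough Python equivalents, shown below as an illustration only (the sample URI and variable names are invented):

```python
import re

uri = "/pictures/bigip.gif"

ends_with     = uri.endswith(".gif")              # ends_with
starts_with   = uri.startswith("/pictures")       # starts_with
contains      = "bigip" in uri                    # contains
matches_regex = bool(re.search(r"\.gif$", uri))   # matches_regex
```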

Cache statements

A cache statement may be either the only statement in a rule, or it may be nested within an if statement. Rules with cache statements are used to select pools based on HTTP header data. Table 4.12 describes the cache statement syntax.

Description of cache statement syntax

Rule Syntax

Description

expression

A Boolean expression setting the condition or conditions under which the rule applies.

origin_pool <pool_name>

This required attribute specifies a pool of servers containing all the site content. Requests are load balanced to this pool when the requested content is not cacheable, when all the cache servers are unavailable, or when you use a BIG-IP to redirect a miss request from a cache.

cache_pool <pool_name>

This required attribute specifies a pool of cache servers to which requests are directed to optimize cache performance.

hot_pool <pool_name>

This optional attribute specifies a pool of servers that contain content to which requests are load balanced when the requested content is frequently requested (hot). If you specify any of the following attributes in this table, the hot_pool attribute is required.

hot_threshold <hit_rate>

This optional attribute specifies the minimum number of requests for content that cause the content to change from cool to hot at the end of the period (hit_period).

cool_threshold <hit_rate>

This optional attribute specifies the maximum number of requests for specified content that cause the content to change from hot to cool at the end of the period.

hit_period <seconds>

This optional attribute specifies the period in seconds over which to count requests for particular content before deciding whether to change the hot or cool state of the content.

content_hash_size <sets_in_content_hash>

This optional attribute specifies the number of subsets into which the content is divided when calculating whether content is hot or cool. The requests for all content in the same subset are summed, and a single hot or cool state is assigned to each subset. This attribute should be within the same order of magnitude as the actual number of pieces of content. For example, if the entire site is composed of 500,000 pieces of content, a content_hash_size of 100,000 would be typical.

For a description of pool selection based on HTTP header data, see Pool selection based on HTTP header data, on page 4-56.
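The interaction of hot_threshold, cool_threshold, hit_period, and content_hash_size can be sketched as follows. This is an illustration of the documented behavior, not BIG-IP source; the class and pool names are invented:

```python
class HotCoolTracker:
    def __init__(self, hash_size=1024, hot_threshold=100, cool_threshold=10):
        self.hash_size = hash_size
        self.hot_threshold = hot_threshold
        self.cool_threshold = cool_threshold
        self.hits = [0] * hash_size        # requests per subset this period
        self.state = ["cool"] * hash_size  # one hot/cool state per subset

    def bucket(self, uri):
        # Requests are bucketed into content_hash_size subsets.
        return hash(uri) % self.hash_size

    def record(self, uri):
        self.hits[self.bucket(uri)] += 1

    def end_of_period(self):
        # At the end of each hit_period, compare every subset's count
        # to the thresholds and update its state, then reset the counts.
        for i, count in enumerate(self.hits):
            if count >= self.hot_threshold:
                self.state[i] = "hot"
            elif count <= self.cool_threshold:
                self.state[i] = "cool"
            self.hits[i] = 0

    def pool_for(self, uri):
        if self.state[self.bucket(uri)] == "hot":
            return "hot_pool"
        return "cache_pool"
```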

Configuring rules

You can create rules from the command line or by using the Configuration utility. Each of these methods is described in this section.

To define a rule using the Configuration utility

  1. In the navigation pane, click Rules.
    This opens the Rules screen.
  2. Click the Add button.
    The Add Rule screen opens.
  3. In the Add Rule screen, fill in the fields to add a rule.
    You can type in the rule as an unbroken line, or you can use the Enter key to add line breaks.
  4. Click Done.

To define a rule from the command line

To define a rule from the command line, use the following syntax:

b rule <rule_name> '{ <if_statement> | <cache_statement> }'

Table 4.13 contains descriptions of all the elements you can use to create rules.

Elements for rules construction

Element

Description

rule definition

rule <rule_name> { <if_statement> | <discard_statement> | <use_statement> | <cache_statement> | <redirect_statement> }

if statement

if ( <expression> ) { <statement> }
[ else { <statement> } ] [ else if ( <expression> ) { <statement> } ]

discard statement

discard

use statement

use ( <pool_name> )

cache statement

cache ( <expression> ) { origin_pool <pool_name> cache_pool <pool_name> [ hot_pool <pool_name> ] [ hot_threshold <hit_rate> ] [ cool_threshold <hit_rate> ] [ hit_period <seconds> ][ content_hash_size <sets_in_content_hash> ] }

redirect statement

redirect ( <redirect URL> )

literal

<regex_literal>
<string_literal>
<address_literal>

regular expression literal

A string of 1 to 63 characters, enclosed in quotes, that may contain regular expressions

string literal

A string of 1 to 63 characters, enclosed in quotes

address literal

<dot_notation_longword> [netmask <dot_notation_longword>]

Dot notation longword

<0-255>.<0-255>.<0-255>.<0-255>

variable

http_method
http_header <header tag>
http_version
http_uri
http_host
http_cookie <cookie_name>
link_qos
ip_tos
client_addr
server_addr
client_port
server_port

ip_protocol

binary operator

or
and
contains
matches
equals
starts_with
ends_with
matches_regex
one of
redirect to

Configuring a remote origin server

To ensure that a remote origin server or cache server responds to the BIG-IP rather than to the original cache server that generated the missed request, the BIG-IP also translates the source of the missed request to the translated address and port of the associated SNAT connection.

To enable this configuration, you must:

  • Create a SNAT for each cache server.
  • Create a SNAT auto-mapping for bounceback.

Configuring a SNAT for each origin server

The SNAT translates the address of a packet from the cache server to the address you specify. For more information about SNATs, see SNATs, on page 4-121.

Creating a SNAT automap for bounceback

You must now configure a second SNAT mapping, in this case with the SNAT automap feature, so that when requests are directed to the origin server, the server will reply through the BIG-IP and not directly to the client. (If the BIG-IP replied directly to the client, the next request would then go directly to the origin server, removing the BIG-IP from the loop.) For more information about SNATs, see SNATs, on page 4-121.

Additional rule examples

This section contains additional examples of rules including:

  • Cookie rule
  • Language rule
  • Cache rule
  • AOL rule
  • Rule using the ip_protocol variable
  • Rule using IP address and port variables
  • Rule using the one of operator
  • Rules based on HTTP header insertion

Cookie rule

Figure 4.32 shows a cookie rule that load balances based on whether the user-id cookie contains the string VIRTUAL.

Figure 4.32 Cookie rule example

 if ( exists http_cookie "user-id" and    
http_cookie "user-id" contains "VIRTUAL" ) {
use ( virtual_pool )
}
else {
use ( other_pool )
}

Language rule

Figure 4.33 shows a rule that load balances based on the language requested by the browser.

Figure 4.33 Sample rule that load balances based on the language requested by the browser

 if ( exists http_header "Accept-Language" ) {    
if ( http_header "Accept-Language" equals "fr" ) {
use ( french_pool )
}
else if ( http_header "Accept-Language" equals "sp" ) {
use (spanish_pool )
}
else {
use ( english_pool )
}
}

Cache rule

Figure 4.34 shows an example of a rule that you can use to send cache content, such as .gifs, to a specific pool.

Figure 4.34 An example of a cache rule

 if ( http_uri ends_with "gif" or    
http_uri ends_with "html" ) {
use ( cache_pool )
}
else {
use ( server_pool )
}

AOL rule

Figure 4.35 is an example of a rule that you can use to load balance incoming AOL connections.

Figure 4.35 An example of an AOL rule

 port 80 443 enable    
pool aol_pool {
min_active_members 1
member 12.0.0.31:80 priority 4
member 12.0.0.32:80 priority 3
member 12.0.0.33:80 priority 2
member 12.0.0.3:80 priority 1
}
pool other_pool {
member 12.0.0.31:80
member 12.0.0.32:80
member 12.0.0.33:80
member 12.0.0.3:80
}
pool aol_pool_https {
min_active_members 1
member 12.0.0.31:443 priority 4
member 12.0.0.32:443 priority 3
member 12.0.0.33:443 priority 2
member 12.0.0.3:443 priority 1
}
pool other_pool_https{
member 12.0.0.31:443
member 12.0.0.32:443
member 12.0.0.33:443
member 12.0.0.3:443
}
rule aol_rule {
if (client_addr equals one of aol) {
use ( aol_pool )
}
else {
use ( other_pool)
}
}

Figure 4.35 An example of an AOL rule (continued)

 rule aol_rule_https {    
if ( client_addr equals 152.163.128.0 netmask 255.255.128.0
or client_addr equals 195.93.0.0 netmask 255.255.254.0
or client_addr equals 205.188.128.0 netmask 255.255.128.0 ) {
use ( aol_pool_https )
}
else {
use ( other_pool_https)
}
}
virtual 15.0.140.1:80 { use rule aol_rule }
virtual 15.0.140.1:443 { use rule aol_rule_https special ssl 30 }

Rule using the ip_protocol variable

Figure 4.36 shows a rule that uses the ip_protocol variable.

Figure 4.36 An example of an IP protocol rule

 rule myrule {     
if ( ip_protocol == 37 ) {
use ( bootp_pool )
} else if ( ip_protocol == 22 ){
use ( ipsec_pool )
}
else {
use ( slow_pool )
}
}

Rule using IP address and port variables

Figure 4.37 shows a rule that uses the server_addr and server_port rule variables.

Figure 4.37 An example of a rule using IP address and port variables

 rule myrule {     
if ( server_addr equals 10.0.0.0 netmask 255.255.0.0 ) {
use ( fast_pool )
} else if ( server_port equals 80 ){
use ( fast_pool )
}
else {
use ( slow_pool )
}
}

Rule using the one of operator

A good use of the one of operator in a rule is when you have a class such as that shown in Figure 4.38.

 class images {
".gif"

".jpg"
".bmp"
}

Figure 4.38 An example of a class

Given the above class, you could create a rule that uses the one of operator to select a pool based on whether the value of the variable http_uri ends with a member of the class images. Figure 4.39 shows this rule.

 rule myrule {    
if ( http_uri ends_with one of images ) {
use ( image_pool )
}
else {
use ( dynamic_pool )
}
}

Figure 4.39 Example of a rule using the one of operator

Rules based on HTTP header insertion

You can create rules based on headers that an SSL Accelerator proxy has inserted into HTTP requests. For examples of these types of rules, see Inserting headers into HTTP requests, on page 4-95.

Virtual servers

A virtual server with its virtual address is the visible, routable entity through which nodes in a load balancing pool are made available to a client, either directly or indirectly through a rule. (The exception is the forwarding virtual server, which simply forwards traffic and has no associated pools.)

You must configure a pool of servers before you can create a virtual server that references the pool. Before you configure virtual servers, you need to know:

  • Which virtual server type meets your needs
  • Whether you need to activate optional virtual server properties

    Once you know which virtual server options are useful in your network, you can define any one of the four types of virtual servers.

Virtual server types

You can configure various types of virtual servers, depending on your needs. Table 4.14 shows the types of virtual servers that you can create.

Virtual server types

Virtual Server Type

Description

Standard virtual server

A standard virtual server is a virtual server with a full IP address. For example:

192.168.200.30:http

Wildcard virtual server

There are two types of wildcard servers:

A port-specific wildcard virtual server is a virtual server with a port specified. A port-specific wildcard virtual server is used to accept all traffic for a specific service.

A default wildcard virtual server is a wildcard virtual server with the service 0, *, or any. A default wildcard server acts like a default router, accepting all traffic that does not match a standard, network, or port-specific wildcard server.

Network virtual server

A network virtual server is a virtual server with a network IP address, allowing it to handle a whole range of addresses in a network. For example:

192.168.200.0:http

Forwarding virtual server

A forwarding virtual server is a virtual server without a pool that simply forwards traffic to the destination node.

Standard virtual servers

A standard virtual server represents a specific site, such as an Internet web site or an FTP site, and it load balances content servers that are members of a pool. The IP address that you use for a standard virtual server should match the IP address that DNS associates with the site's domain name.

Note: If you are using a 3-DNS Controller in conjunction with the BIG-IP, the 3-DNS Controller uses the IP address associated with the registered domain name in its own configuration. For details, refer to the 3-DNS Administrator Guide.

To define a standard virtual server using the Configuration utility

  1. In the navigation pane, click Virtual Servers.
  2. Click the Add button.
    The Add Virtual Server screen opens.
  3. In the Address box, type the virtual server's IP address or host name.
  4. In the Port box, either type a port number or select a service name from the list.
  5. In the Select Physical Resources screen, click the Pool button.
    If you want to assign a load balancing rule to the virtual server, click Rule and select a rule you have configured.
  6. In the Pool list, select the pool you want to apply to the virtual server.
  7. Click the Apply button.

To define a standard virtual server from the command line

Type the bigpipe virtual command as shown below. Also, remember that you can use host names in place of IP addresses, and that you can use standard service names in place of port numbers.

b virtual <virt_ip>:<service> use pool <pool_name>

For example, the following command defines a virtual server that maps to the pool my_pool:

b virtual 192.200.100.25:80 use pool my_pool

Note: If a virtual server is to have the same IP address as a node in an associated VLAN, you must perform some additional configuration tasks. These tasks consist of: creating a VLAN group that includes the VLAN in which the node resides, assigning self IP addresses to the VLAN group, and disabling the virtual server on the relevant VLAN. For information on creating VLAN groups and assigning self IP addresses to them, see Chapter 3, Creating VLAN groups, on page 3-14. For information on disabling a virtual server for a specific VLAN, see Enabling or disabling a virtual server, on page 4-84.

Wildcard virtual servers

Wildcard virtual servers are a special type of virtual server designed to manage network traffic for transparent network devices, such as transparent firewalls, routers, proxy servers, or cache servers. A wildcard virtual server manages network traffic that has a destination IP address unknown to the BIG-IP. A standard virtual server typically represents a specific site, such as an Internet web site, and its IP address matches the IP address that DNS associates with the site's domain name. When the BIG-IP receives a connection request for that site, the BIG-IP recognizes that the client's destination IP address matches the IP address of the virtual server, and it subsequently forwards the client to one of the content servers that the virtual server load balances.

However, when you are load balancing transparent nodes, a client's destination IP address is going to seem random. The client is connecting to an IP address on the other side of the firewall, router, or proxy server. In this situation, the BIG-IP cannot match the client's destination IP address to a virtual server IP address. Wildcard virtual servers resolve this problem by not translating the incoming IP address at the virtual server level on the BIG-IP. For example, when the BIG-IP does not find a specific virtual server match for a client's destination IP address, it matches the client's destination IP address to a wildcard virtual server. The BIG-IP then forwards the client's packet to one of the firewalls or routers that the wildcard virtual server load balances, which in turn forwards the client's packet to the actual destination IP address.

Default vs. port-specific wildcard servers

When you configure wildcard virtual servers and the nodes that they load balance, you can use a wildcard port (port 0) in place of a real port number or service name. A wildcard port handles any and all types of network services.

A wildcard virtual server that uses port 0 is referred to as a default wildcard virtual server, and it handles traffic for all services. A port-specific wildcard virtual server handles traffic only for a particular service, and you define it using a service name or a port number. If you use both a default wildcard virtual server and port-specific wildcard virtual servers, any traffic that does not match either a standard virtual server or one of the port-specific wildcard virtual servers is handled by the default wildcard virtual server.
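The matching order described above can be sketched as follows. A standard virtual server wins, then a port-specific wildcard for that service, then the default wildcard; this is a Python illustration only, and all names and data structures are invented:

```python
def match_virtual(dest_ip, dest_port, standard_servers,
                  wildcard_ports, has_default_wildcard):
    # standard_servers: set of (ip, port) pairs with full addresses.
    # wildcard_ports: set of ports with port-specific wildcard servers.
    if (dest_ip, dest_port) in standard_servers:
        return "standard"
    if dest_port in wildcard_ports:
        return "port-specific wildcard"
    if has_default_wildcard:
        return "default wildcard"
    return "no match"
```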

By default, a default wildcard virtual server is enabled for all VLANs. However, you can specifically disable any VLANs that you do not want the default wildcard virtual server to support. Disabling VLANs for the default wildcard virtual server is done by creating a VLAN disabled list. Note that a VLAN disabled list applies to default wildcard virtual servers only. You cannot create a VLAN disabled list for a wildcard virtual server that is associated with one VLAN only.

You can use port-specific wildcard virtual servers for tracking statistics for a particular type of network traffic, or for routing outgoing traffic, such as HTTP traffic, directly to a cache server rather than a firewall or router.

We recommend that when you define transparent nodes that need to handle more than one type of service, such as a firewall or a router, you specify an actual port for the node and turn off port translation for the virtual server.

For the procedure to create a default wildcard server, see To create a default wildcard virtual server using the Configuration utility, on page 4-73.

Creating wildcard virtual servers

Creating a wildcard virtual server requires three steps. First, you must create a pool that contains the addresses of the transparent devices. Next, you must create the wildcard virtual server. Then you must turn port translation off for the virtual server. The following sections describe these steps, followed by the procedure for creating a default wildcard server.

To create a pool of transparent devices using the Configuration utility

To create a pool of transparent devices, use the Add Pool wizard, available from the Pools screen. For more information, see To create a pool using the Configuration utility, on page 4-3.

To create a wildcard virtual server using the Configuration utility

  1. In the navigation pane, click Virtual Servers.
  2. Click the Add button.
    The Add Virtual Server screen opens.
  3. In the Address box, type the wildcard IP address 0.0.0.0.
  4. In the Port box, type a port number, or select a service name from the list. Note that port 0 defines a wildcard virtual server that handles all types of services. If you specify a port number, you create a port-specific wildcard virtual server. The wildcard virtual server handles traffic only for the port specified. For more information, see Default vs. port-specific wildcard servers, on page 4-71.
  5. In Resources, click the Pool button.
  6. In the Pool list, select the pool you want to apply to the virtual server.
  7. Click the Apply button.

To turn off port translation for a wildcard virtual server using the Configuration utility

After you define the wildcard virtual server with a wildcard port, you must disable port translation for the virtual server.

  1. In the navigation pane, click Virtual Servers.
    The Virtual Servers screen opens.
  2. In the virtual server list, click the virtual server for which you want to turn off port translation.
    The Virtual Server Properties screen opens.
  3. In the Enable Translation section, clear the Port box.
  4. Click the Apply button.

To create a wildcard virtual server from the command line

  1. Create the pool of transparent devices, using the bigpipe pool command. For example, you can create a pool of transparent devices called transparent_pool that uses the Round Robin load balancing mode:

    b pool transparent_pool { \
       member 10.10.10.101:80 \
       member 10.10.10.102:80 \
       member 10.10.10.103:80 }

  2. Create a wildcard virtual server that maps to the pool transparent_pool. Because the members are firewalls and need to handle a variety of services, the virtual server is defined using port 0 (which can also be written as * or any). For the node port, you can specify any valid non-zero port and then turn off port translation for that port; in this example, the pool members use port 80, which is the port that service checks ping. For example:

    b virtual 0.0.0.0:0 use pool transparent_pool

  3. Turn off port translation for the port in the virtual server definition. In the following example, port 80 is used for service checking. If you do not turn off port translation, all incoming traffic is translated to port 80.

    b virtual 0.0.0.0:0 translate port disable

To create a default wildcard virtual server using the Configuration utility

  1. In the Navigation pane, select Virtual Servers.
    The Virtual Servers screen displays.
  2. Click the Add button.
  3. In the Address field, type the IP address 0.0.0.0.
  4. Click Next.
  5. From the VLAN box, select all.
  6. Click Done.

To create a default wildcard virtual server from the command line

To create a default wildcard virtual server from the command line, use the bigpipe virtual command with the following syntax:

b virtual *:* use pool <pool_name>

Creating multiple wildcard servers

In previous releases, BIG-IP supported one wildcard virtual server only, designated by the IP address 0.0.0.0. With this release, you can define multiple wildcard virtual servers, all running simultaneously. Each wildcard virtual server must be assigned to an individual VLAN, and therefore handles packets for that VLAN only.

To create multiple wildcard virtual servers, you can use either the Configuration utility or the bigpipe virtual command.

To create multiple wildcard virtual servers using the Configuration utility

To create multiple wildcard virtual servers using the Configuration utility, use the following procedure:

  1. In the Navigation pane, select Virtual Servers.
    The Virtual Servers screen displays.
  2. Click the Add button.
  3. In the Address field, type the IP address 0.0.0.0.
  4. In the Service field, type the name of a service or select a service from the list box.
  5. Click Next.
  6. From the VLAN box, choose a VLAN name. Selecting all creates a default wildcard virtual server.
  7. Click Next.
  8. Continue configuring all properties for the wildcard virtual server. Note that on the Configure Basic Properties screen, if you are creating a default wildcard virtual server, you can disable any VLANs associated with that wildcard virtual server.
  9. Click Done.

    Repeat for each wildcard virtual server that you want to create.

To create multiple wildcard virtual servers from the command line

To create a separate wildcard virtual server per VLAN from the command line, use the following command-line syntax:

b virtual <vlan_name> use pool <pool_name>

For example, the following commands define two wildcard virtual servers, the first for VLAN internal, and the second for VLAN external:

b virtual internal use pool my_pool

b virtual external use pool my_pool

Network virtual servers

You can configure a network virtual server to handle a whole network range, instead of just one IP address, or all IP addresses (as a wildcard virtual server does). For example, the virtual server in Figure 4.40 handles all traffic addressed to the 192.168.1.0 network.

Figure 4.40 A sample network virtual server

 bigpipe virtual 192.168.1.0:0 {
    netmask 255.255.255.0 use pool ingress_firewalls
 }

A network virtual server is a virtual server that has no bits set in the host portion of the IP address. The example above directs all traffic destined to the subnet 192.168.1.0/24 through the BIG-IP to the ingress_firewalls pool.

The netmask of a network virtual server establishes which portion of the address is the network. By default, this is the netmask of the self IP address. In the example, the netmask 255.255.255.0 states that the network portion of the address is 192.168.1, which in this case is obvious because only the last octet has a zero value.

A less obvious case would be the network virtual server 10.0.0.0:0. Here, the zero in the second octet is ambiguous: it could be a wildcard or it could be a literal zero. If it is a wildcard, this would be established by a netmask of 255.0.0.0. If it is a literal zero, this would be established by a netmask of 255.255.0.0.
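If Python is available, the standard ipaddress module gives a quick way to check how a netmask resolves this ambiguity; the sketch below shows the two readings of 10.0.0.0 described above:

```python
import ipaddress

# The netmask decides how 10.0.0.0 is read: with 255.0.0.0 the second octet
# belongs to the host (wildcard) portion; with 255.255.0.0 it is a literal
# zero in the network portion.
wide = ipaddress.ip_network("10.0.0.0/255.0.0.0")
narrow = ipaddress.ip_network("10.0.0.0/255.255.0.0")

print(wide.prefixlen)    # 8  -> network portion is just "10"
print(narrow.prefixlen)  # 16 -> network portion is "10.0"

# An address with a nonzero second octet matches only the wide reading.
print(ipaddress.ip_address("10.200.0.1") in wide)    # True
print(ipaddress.ip_address("10.200.0.1") in narrow)  # False
```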

Another way you can use this feature is to create a catch-all web server for an entire subnet. For example, you could create the following network virtual server, shown in Figure 4.41.

Figure 4.41 A catch-all web server configuration.

 bigpipe virtual 192.168.1.0:http {
    netmask 255.255.255.0 broadcast 192.168.1.255
    use pool default_webservers
 }

This configuration directs a web connection destined to any address within the subnet 192.168.1.0/24 to the default_webservers pool.

Forwarding virtual servers

A forwarding virtual server is just like other virtual servers, except that it has no pool of nodes to load balance; it simply forwards the packet directly to the destination node. Connections are added, tracked, and reaped just as with other virtual servers, and you can also view statistics for forwarding virtual servers.

To configure forwarding virtual servers using the Configuration utility

  1. In the navigation pane, click Virtual Servers.
    The Virtual Servers screen opens.
  2. Click the Add button.
    The Add Virtual Server screen opens.
  3. Type the virtual server attributes, including the address and port number.
  4. Under Configure Basic Properties, clear the check from Enable Arp.
  5. On the Select Physical Resources screen, click the Forwarding button.
  6. Click the Apply button.

To configure a forwarding virtual server from the command line

Use the following syntax to configure forwarding virtual servers:

b virtual <virt_ip>:<service> forward

b virtual <virt_ip>:<service> arp disable

For example, to allow only one service in:

b virtual 206.32.11.6:80 forward

b virtual 206.32.11.6:80 arp disable

Use the following command to allow only one server in:

b virtual 206.32.11.5:0 forward

b virtual 206.32.11.5:0 arp disable

To forward all traffic, use the following command:

b virtual 0.0.0.0:0 forward

In some of the configurations described here, you need to set up a wildcard virtual server on one side of the BIG-IP to load balance connections across transparent devices. You can create another, forwarding wildcard virtual server on the other side of the BIG-IP to receive connections from the transparent devices and forward them on to their destination.

Tip: If you do not want BIG-IP to load balance your traffic but do want to take advantage of certain pool attributes, you can instead use a feature called a forwarding pool. For more information on forwarding pools, see Forwarding pools, on page 4-47.

Note: If a forwarding virtual server is to have the same IP address as a node in an associated VLAN, you must perform some additional configuration tasks: creating a VLAN group that includes the VLAN in which the node resides, assigning self IP addresses to the VLAN group, and disabling the virtual server on the relevant VLAN. For information on creating VLAN groups and assigning self IP addresses to them, see Chapter 3, Creating VLAN groups, on page 3-14. For information on disabling a virtual server for a specific VLAN, see Enabling or disabling a virtual server, on page 4-84.

Virtual server options

For each type of virtual server, you can configure several options. These options are listed in Table 4.15.

Table 4.15 Virtual server configuration options

Mirroring information: You can use mirroring to maintain the same state information in the standby unit that is in the active unit, allowing transactions such as FTP file transfers to continue as though uninterrupted.

Netmask and broadcast: You can override the default netmask and broadcast for a network virtual address.

Connection limits: You can set a concurrent connection limit on one or more virtual servers.

Translation properties: You can turn port translation off for a virtual server if you want to use the virtual server to load balance connections to any service.

Dynamic connection rebinding: You can cause any connections that were made to a node address or service to be redirected to another node, if the original node transitions to a down state.

Last hop pools: You can direct reply traffic to the last hop router using a last hop pool. This overrides the auto_lasthop setting.

Rules: You can configure a virtual server to reference a rule. Rules are primarily used for selecting pools during load balancing.

Software acceleration: You can speed up packet flow for TCP connections when the packets are not fragmented.

Mirroring virtual server state

Mirroring provides seamless recovery for current connections, persistence information, SSL persistence, or sticky persistence when a BIG-IP fails. When you use the mirroring feature, the standby unit maintains the same state information as the active unit. Transactions such as FTP file transfers continue as though uninterrupted.

Note: Mirroring slows BIG-IP performance and is primarily useful for long-lived services like FTP and Telnet. Mirroring is not useful for short-lived connections like HTTP.

Since mirroring is not intended to be used for all connections and persistence, it must be specifically enabled for each virtual server.

To control mirroring for a virtual server

To control mirroring for a virtual server, use the bigpipe virtual mirror command to enable or disable mirroring of persistence information, or connections, or both. The syntax of the command is:

b virtual <virt_addr>:<service> mirror [conn] enable | disable

To mirror connection information for the virtual server

Use the conn argument to mirror connection information for the virtual server. To display the current mirroring setting for a virtual server, use the following syntax:

b virtual <virt_addr>:<service> mirror [conn] show

If you do not specify conn, the BIG-IP assumes that you want to display connection information.

Note: If you set up mirroring on a virtual server that supports FTP connections, you need to mirror the control port virtual server, and the data port virtual server.

The following example shows the two commands used to enable mirroring for virtual server v1 on the FTP control and data ports:

b virtual v1:21 mirror conn enable

b virtual v1:20 mirror conn enable

Displaying information about virtual servers

You can display information about all virtual servers in your configuration, or you can display information about one or more specific virtual servers.

To display information about all virtual servers in your configuration

Use the following syntax to display information about all virtual servers included in the configuration:

b virtual show

To display information about one or more virtual servers in your configuration

Use the following syntax to display information about one or more virtual servers included in the configuration:

b virtual <virt_ip>:<service> [...<virt_ip>:<service>] show

The command displays information such as the nodes associated with each virtual server, the nodes' status, and the current, total, and maximum number of connections managed by the virtual server since the BIG-IP was last rebooted.

Setting a user-defined netmask and broadcast

The default netmask for a virtual address, and for each virtual server hosted by that virtual address, is determined by the network class of the IP address entered for the virtual server. The default broadcast is automatically determined by the BIG-IP, and it is based on the virtual address and the current netmask. You can override the default netmask and broadcast for a network virtual address only.

All virtual servers hosted by the virtual address use the netmask and broadcast of the virtual address, whether they are default values or they are user-defined values.
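As a quick illustration of the classful defaults mentioned above, the netmask implied by an address's network class can be sketched as follows (a simplified illustration of classful addressing, not BIG-IP code):

```python
def default_classful_netmask(ip):
    """Return the classful default netmask for a dotted-quad IP address.

    Simplified illustration: class A (first octet 0-127) uses 255.0.0.0,
    class B (128-191) uses 255.255.0.0, class C (192-223) uses
    255.255.255.0. (Hypothetical helper, for illustration only.)
    """
    first = int(ip.split(".")[0])
    if first < 128:
        return "255.0.0.0"      # class A
    if first < 192:
        return "255.255.0.0"    # class B
    if first < 224:
        return "255.255.255.0"  # class C
    raise ValueError("class D/E addresses have no default netmask")
```

For example, a virtual address of 192.168.1.0 would receive the class C default netmask of 255.255.255.0 unless you override it.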

To set a custom netmask and broadcast

If you want to use a custom netmask and broadcast, you define both when you define the network virtual server:

b virtual <virt_ip>[:<service>] [vlan <vlan_name> disable | enable] [netmask <ip>] [broadcast <ip>] use pool <pool_name>

Note: The BIG-IP calculates the broadcast based on the IP address and the netmask. In most cases, a user-defined broadcast address is not necessary.

Again, even when you define a custom netmask and broadcast in a specific network virtual server definition, the settings apply to all virtual servers that use the same virtual address. The following sample command shows a user-defined netmask and broadcast:

b virtual www.SiteOne.com:http \
   netmask 255.255.0.0 \
   broadcast 10.0.140.255 \
   use pool my_pool

The /bitmask option shown in the following example applies network and broadcast address masks. In this example, a 24-bit bitmask sets the network mask and broadcast address for the virtual server:

b virtual 206.168.225.0:80/24 use pool my_pool

The effect of the bitmask is the same as applying the 255.255.255.0 netmask: the broadcast address is derived as 206.168.225.255 from the network mask for this virtual server.
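The equivalence of the /24 bitmask and the 255.255.255.0 netmask, and the derived broadcast address, can be checked with Python's standard ipaddress module:

```python
import ipaddress

# A /24 bitmask and the netmask 255.255.255.0 are two ways of writing the
# same mask, so both yield the same network and the same broadcast address
# (the network address with all host bits set).
by_prefix = ipaddress.ip_network("206.168.225.0/24")
by_netmask = ipaddress.ip_network("206.168.225.0/255.255.255.0")

print(by_prefix == by_netmask)       # True
print(by_prefix.broadcast_address)   # 206.168.225.255
```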

Setting a connection limit

The default setting is to have no limit to the number of concurrent connections allowed on a virtual server.

To set a concurrent connection limit

You can set a concurrent connection limit on one or more virtual servers using the following command:

b virtual <virt_ip>[:<service>] [...<virt_ip>[:<service>]] limit <max_conn>

The following example shows two virtual servers set to have a concurrent connection limit of 5000 each:

b virtual www.SiteOne.com:http www.SiteTwo.com:ssl limit 5000

To turn off the connection limit

To turn off the connection limit, set the <max_conn> variable to zero:

b virtual <virt_ip>[:<service>] [...<virt_ip>[:<service>]] limit 0

Setting translation properties for virtual addresses and ports

Turning off port translation for a virtual server is useful if you want to use the virtual server to load balance connections to any service.

You can also configure the translation properties for a virtual server address. This option is useful when the BIG-IP is load balancing devices that have the same IP address. This is typical with the nPath routing configuration where duplicate IP addresses are configured on the loopback device of several servers.

To enable or disable port translation

Use the following syntax to enable or disable port translation for a virtual server:

b virtual <virt_ip>:<service> translate port enable | disable | show

To enable or disable address translation

Use the following syntax to enable or disable address translation for a virtual server:

b virtual <virt_ip>:<service> translate addr enable | disable | show

Setting dynamic connection rebinding

Dynamic connection rebinding is a feature for those virtual servers that are load balancing transparent devices such as firewalls or routers. Dynamic connection rebinding causes any connections that were made to a node address or service to be redirected to another node, if the original node transitions to a down state. In this case, all connections to the failed node that were made through the virtual server are moved to a newly-selected node from the virtual server's pool. The new node is selected using the pool's load-balancing algorithm. By default, dynamic connection rebinding is disabled.

Note: This feature does not apply to virtual servers for non-transparent devices because they usually involve application state between the client and server node. This state cannot be recreated with a newly-selected node.

To enable, disable, or show the status of dynamic connection rebinding, you can use either the Configuration utility or the bigpipe virtual command.

To set dynamic connection rebinding using the Configuration utility

To set dynamic connection rebinding using the Configuration utility, use the following procedure.

  1. In the Navigation pane, click Virtual Servers.
    The Virtual Servers screen opens.
  2. Select the IP address for the virtual server. This displays the Properties page for that server.
  3. Check the Enable Connection Rebind check box.
  4. Click the Apply button.

To set dynamic connection rebinding from the command line

To manage dynamic connection rebinding using the bigpipe virtual command, type one of the following commands.

b virtual <ip>:<service> conn rebind enable
b virtual <ip>:<service> conn rebind disable
b virtual <ip>:<service> conn rebind show

Setting up last hop pools for virtual servers

In cases where you have more than one router sending connections to a BIG-IP, connections are automatically sent back through the same router from which they were received, as long as the auto_lasthop global variable is enabled (it is enabled by default). If you want to exclude one or more routers from this behavior, or if auto_lasthop is disabled for any reason (for example, you might not want it for an SSL gateway), you can use a last hop pool instead. (If auto_lasthop is enabled, the last hop pool takes precedence over it.)

To configure a last hop pool, you must first create a pool containing the router inside addresses. After you create the pool, use the following syntax to configure a last hop pool for a virtual server:

b virtual <virt_ip>:<service> lasthop pool <pool_name> | none | show

Configuring virtual servers that reference rules

Once you have created a rule, you must configure the virtual server to reference the rule. You can configure a virtual server to reference a rule by using either the Configuration utility or the bigpipe command.

To configure a virtual server that references a rule using the Configuration utility

  1. In the navigation pane, click Virtual Servers.
    The Virtual Servers screen opens.
  2. Add the attributes you want for the virtual server such as address, port, unit ID, and interface.
  3. In the Resources section, click Rule.
  4. In the list of rules, select the rule you want to apply to the virtual server.
  5. Click the Apply button.

To configure a virtual server that references a rule from the command line

There are several elements required for defining a virtual server that references a rule from the command line:

b virtual <virt_serv_key> { <virt_options> <rule_name_reference> }

Each of these elements is described in Table 4.16.

Table 4.16 The command line rule elements

<virt_serv_key>: A virtual server key definition, in the form <virtual_address>:<virt_port> [unit <ID>].

<virt_options>: Virtual server options. For more information, see Virtual server options, on page 4-77.

<rule_name_reference>: A rule name reference, in the form use rule <rule_name>. Rule names are strings of 1 to 31 characters.

Turning software acceleration off for virtual servers using IPFW rate filters

The software acceleration feature speeds packet flow for TCP connections when the packets are not fragmented. For configurations with no IPFW rate filter present, software acceleration is turned on by default; the global variable fastflow_active has a default value of auto. The auto setting enables acceleration globally if IPFW filters are not present, and disables it globally if IPFW filters are present. (This is because, with acceleration on, IPFW examines only the first SYN packet in any given connection, rather than filtering all packets.) If you want to turn on acceleration globally but turn it off for the specific virtual servers that use IPFW rate filters, you must change the fastflow_active setting from auto to on, and then disable the virtual servers individually using the bigpipe virtual <ip>:<service> accelerate disable command.
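The interaction between the global fastflow_active setting and the per-virtual-server override described above can be sketched as follows (a simplified illustration with a hypothetical helper function, not BIG-IP code):

```python
def acceleration_enabled(global_setting, ipfw_present, per_vs_override=None):
    """Sketch of the software acceleration decision for one virtual server.

    global_setting:  "auto", "on", or "off" (the fastflow_active variable)
    ipfw_present:    whether any IPFW rate filters are configured
    per_vs_override: "disable" if 'accelerate disable' was set on this
                     virtual server, else None
    (Hypothetical helper, for illustration only.)
    """
    if per_vs_override == "disable":
        return False                 # per-virtual-server disable always wins
    if global_setting == "auto":
        return not ipfw_present      # auto: on only when no IPFW filters
    return global_setting == "on"    # explicit on/off applies globally
```

With fastflow_active set to on, acceleration stays on globally even when IPFW filters exist, and only the virtual servers explicitly disabled with accelerate disable are excluded.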

To set software acceleration controls from the command line

To enable software acceleration globally in a way that can be overridden for individual virtual servers, set the bigpipe global variable fastflow_active to on with the following command:

b global fastflow_active on

Then, to disable software acceleration for individual virtual servers that use IPFW rate filtering, use the following bigpipe command:

b virtual <ip>:<service> accelerate disable

For example, if you want to turn acceleration off for the virtual server 10.10.10.50:80, type the following command:

b virtual 10.10.10.50:80 accelerate disable

You can define a virtual server with acceleration disabled using the following syntax:

b virtual <ip>:<service> use pool the_pool accelerate disable

For example, if you want to define the virtual server 10.10.10.50:80 with the pool IPFW_pool and acceleration turned off, type the following command:

b virtual 10.10.10.50:80 use pool IPFW_pool accelerate disable

Additional virtual server tasks

Once you have created a virtual server and configured options for it, you can perform the following tasks.

Enabling or disabling a virtual server

You can remove an existing virtual server from network service, or return the virtual server to service, using the disable and enable keywords. As an option, you can enable or disable a virtual server for a specific VLAN only.

When you disable a virtual server, the virtual server no longer accepts new connection requests, but it allows current connections to finish processing before the virtual server goes down.

To disable or enable a virtual server using the Configuration utility

  1. In the navigation pane, click Virtual Servers.
    The list of virtual servers displays.
  2. Click on the virtual server that you want to disable or enable.
    This displays the properties for that virtual server.
  3. If you want to disable the virtual server for all VLANs, clear the Enabled check box. If you want to disable the virtual server for a specific VLAN, locate the VLANs Disabled box and move the relevant VLAN name from the Existing list to the Disabled list, using the arrows (>>).
  4. If you want to enable the virtual server for all VLANs, click the Enabled check box (if not already checked). If you want to enable the virtual server for a specific VLAN, locate the VLANs Disabled box and move the relevant VLAN name from the Disabled list to the Existing list, using the arrows (>>).
  5. Click Done.

Note: If the Enabled check box is checked and no VLANs are listed in the Disabled list of the VLANs Disabled box, the virtual server is enabled for all VLANs. If the Enabled check box is not checked, the virtual server is disabled for all VLANs.

To disable or enable a virtual server from the command line

Use the following syntax to disable a virtual server from network service:

b virtual <virt_ip>:<service> [...<virt_ip>:<service>] disable

If you want to disable or enable a virtual server for one or more specific VLANs only, use the following syntax:

b virtual <virt_ip>:<service> vlans <vlan_list> disable | enable

Use the following syntax to return a virtual server to network service:

b virtual <virt_ip>:<service> enable

Note: If you do not specify a VLAN name with the b virtual command, the virtual server is enabled or disabled on all VLANs.

Enabling or disabling a virtual address

You can remove an existing virtual address from network service, or return the virtual address to service, using the disable and enable keywords. Note that when you enable or disable a virtual address, you inherently enable or disable all of the virtual servers that use the virtual address.

b virtual <virt_ip> disable

Use the following syntax to return a virtual address to network service:

b virtual <virt_ip> enable

Displaying information about virtual addresses

You can also display information about the virtual addresses that host individual virtual servers. Use the following syntax to display information about one or more virtual addresses included in the configuration:

b virtual <virt_ip> [... <virt_ip> ] show

The command displays information such as the virtual servers associated with each virtual address, the status, and the current, total, and maximum number of connections managed by the virtual address since the BIG-IP was last rebooted, or since the BIG-IP became the active unit (redundant configurations only).

Deleting a virtual server

Use the following syntax to permanently delete one or more virtual servers from the BIG-IP configuration:

b virtual <virt_ip>:<service> [... <virt_ip>:<service>] delete

Resetting statistics for a virtual server

Use the following command to reset the statistics for an individual virtual server:

b virtual [<virt_ip>:<service>] stats reset

Using other BIG-IP features with virtual servers

After you create a pool and define a virtual server that references the pool, you can set up additional features, such as network address translations (NATs) or extended content verification (ECV). For details on network address translations, see NATs, on page 4-131. For details on persistence for connections that should return to the node to which they last connected, see Persistence, on page 4-21.

Proxies

BIG-IP supports two types of proxies: an SSL Accelerator proxy and a content converter proxy. Using either the Configuration utility or the bigpipe proxy command, you can create, delete, modify, or display the SSL or content converter proxy definitions on the BIG-IP.

For detailed information about setting up the SSL Accelerator feature, see the BIG-IP Solutions Guide, Chapter 9, Configuring an SSL Accelerator. For detailed information about setting up the content converter feature, see the BIG-IP Solutions Guide, Chapter 14, Configuring a Content Converter.

The SSL Accelerator proxy

The SSL Accelerator feature allows the BIG-IP to accept and terminate any connections that are sent via a fully-encapsulated SSL protocol. For example, the BIG-IP can accept HTTPS connections (HTTP over SSL), connect to a web server, retrieve the page, and then send the page to the client.

A key component of the SSL Accelerator feature is that the BIG-IP can retrieve the web page using an unencrypted HTTP request to the content server. With the SSL Accelerator feature, you can configure an SSL proxy on the BIG-IP that decrypts HTTP requests that are encrypted with SSL. Decrypting the request offloads SSL processing from the servers to the BIG-IP. This also allows the BIG-IP to use the header of the HTTP request to intelligently control how the request is handled. (You can optionally configure requests to the servers to be re-encrypted to maintain security on the server side of the BIG-IP as well, using a feature called SSL-to-server. While the servers must then handle the final decryption and re-encryption, SSL processing is still faster than if the entire task were left to the servers.)

When the SSL proxy on the BIG-IP connects to the content server, and address translation is not enabled, the proxy uses the original client's IP address and port as its source address and port. In doing so, the proxy appears to be the client, for logging purposes.

BIG-IP offers several options for configuring an SSL Accelerator proxy. These options are configured separately for each SSL proxy that you create. You can configure these options at the time that you create the proxy.

Note: Before configuring an SSL proxy, you must either obtain a valid x509 certificate from a trusted certificate authority, or generate a valid temporary certificate. In either case, this certificate file must be in PEM format.

Table 4.17 lists the configurable SSL proxy options.

Table 4.17 Configuration options for the SSL Accelerator

SSL-to-Server configuration: Causes the BIG-IP to re-encrypt decrypted requests before sending them to the server, as a way to maintain server-side security.

Client-side authentication: Allows you to configure the SSL proxy to request, require, or ignore certificates presented by a client.

Server-side authentication: Allows you to configure certificate authentication between the SSL proxy and a content server. This capability is part of the SSL-to-Server feature.

HTTP header insertion: Allows you to configure an SSL proxy to insert various types of headers into HTTP requests. For more information, see Inserting headers into HTTP requests.

Specification of ciphers and protocol versions: Allows you to configure an SSL proxy to require specific ciphers or protocol versions. For more information, see Specifying SSL ciphers and protocol versions.

Configuration of trusted CAs: Allows you to configure certificate chaining and verification, as well as to configure the proxy to send to a client a list of CAs that the proxy trusts.

Rewriting of HTTP redirection: Allows you to configure the proxy to convert HTTP redirects to HTTPS redirects.

SSL session cache configuration: Allows you to set a timeout value and a size for the SSL session cache.

SSL proxy failover configuration: Allows you to configure the proxy to initiate a failover on a redundant BIG-IP in the event of a fatal cryptographic hardware failure.

Shutdown configuration: Allows you to configure the way in which the proxy manages clean and unclean shutdowns of SSL connections.

Disabling of ARP requests: Allows you to disable the proxy address for ARP requests.

Last hop pool configuration: Allows you to add a last hop pool to an SSL proxy.

Proxy deletion: Allows you to delete an SSL proxy.

Creating an SSL Accelerator Proxy

When creating an SSL Accelerator proxy, you can enable the proxy to handle either client-side SSL connections only, or both client-side and server-side SSL connections. The following procedures describe how to configure the SSL proxy for client-side connections only. To configure the proxy for server-side connections, see Configuring SSL-to-Server, on page 4-90.

To create an SSL proxy using the Configuration utility

  1. In the navigation pane, click Proxies.
    The Proxies screen opens.
  2. Click the ADD button.
    The Add Proxy screen opens.
  3. In the Proxy Type field, check the box labeled SSL.
  4. Configure the remaining attributes that you want to use with the proxy.
  5. Click Done.

To create an SSL proxy from the command line

Use the following command-line syntax to create an SSL proxy:

b proxy <ip>:<service> [unit <unit_id>] \
target <server|virtual> <ip>:<service> \
clientssl enable \
clientssl key <clientssl_key> \
clientssl cert <clientssl_cert>

The following example creates an SSL proxy:

b proxy 10.1.1.1:443 unit 1 \
target virtual 20.1.1.1:80 \
clientssl enable \
clientssl key my.server.net.key \
clientssl cert my.server.net.crt

When the SSL proxy is written in the /config/bigip.conf file, it looks like the sample in Figure 4.42.

Figure 4.42 An example of an SSL proxy configuration

 proxy 10.1.1.1:443 unit 1 {
    target virtual 20.1.1.1:http
    clientssl enable
    clientssl key my.server.net.key
    clientssl cert my.server.net.crt
 }

Configuring SSL-to-Server

Once the SSL Accelerator proxy has decrypted a client request, you might want the BIG-IP to re-encrypt that request before it sends the request to the server, to maintain server-side security. This feature is known as SSL-to-Server. To implement this feature, you can use either the Configuration utility or the command line.

Note: The SSL-to-server feature requires that you create an SSL proxy, described in Creating an SSL Accelerator Proxy, on page 4-88.

Enabling the SSL-to-Server option

You can configure SSL-to-Server using either the Configuration utility or the command line.

To configure SSL-to-Server using the Configuration utility

  1. In the navigation pane, click Proxies.
    The Proxies screen opens.
  2. Click the ADD button.
    The Add Proxy screen opens.
  3. In the Proxy Type box, check the boxes labeled SSL and ServerSSL.
  4. Configure the remaining attributes that you want to use with the SSL proxy and the SSL-to-Server feature.
  5. Click Done.

To configure SSL-to-Server from the command line

Use a command such as the one in the following example to create an SSL-to-Server proxy:

b proxy 10.1.1.1:443 \
target virtual 20.1.1.10:443 \
clientssl enable \
clientssl key my.server.net.key \
clientssl cert my.server.net.crt \
serverssl enable

You must either configure trusted server-side Certificate Authorities or configure the SSL proxy to ignore server-side certificates. For more information, see Configuring server certificate authentication, on page 4-92.

Figure 4.43 shows the state of the /config/bigip.conf file after creating an SSL proxy with SSL-to-Server enabled. Note that the certificate and key files for client-side SSL connections have also been configured.

Figure 4.43 SSL proxy entries in /config/bigip.conf with SSL-to-Server enabled

 proxy 10.1.1.1:443 unit 1 {
    target virtual 20.1.1.10:https
    clientssl enable
    clientssl key my.server.net.key
    clientssl cert my.server.net.crt
    serverssl enable
 }

Configuring client certificates

This option extends the SSL-to-Server feature by allowing the BIG-IP to authenticate itself using client certificates. You can thus specify a key file and a certificate file for the proxy as an SSL client, as it acts on the server side. When a server-side SSL certificate is specified, the certificate is used only if the server requests client authentication.

To configure client certificates using the Configuration utility

Configuring client certificates for SSL-to-Server using the Configuration utility is similar to configuring the existing client SSL key/certificate pair.

  1. From the navigation pane, click Proxies.
  2. Click the Add button.
  3. Check the SSL and ServerSSL check boxes.
  4. In the boxes labeled Server SSL Certificate and Server SSL Key, either type the names of the key and certificate files or select the names from a list of available key and certificate files.
  5. Click Done.

To configure client keys and certificates from the command line

When configuring SSL-to-Server, you can use the bigpipe proxy command to designate a key file and a certificate file. This is done as in the following example:

b proxy 10.1.1.1:443 \
target virtual 20.1.1.10:443 \
clientssl enable \
clientssl key my.server.net.key \
clientssl cert my.server.net.crt \
serverssl enable \
serverssl key my.client.net.key \
serverssl cert my.client.net.crt

Figure 4.44 shows the state of the /config/bigip.conf file after creating an SSL proxy with SSL-to-Server enabled and configuring the certificates and keys for both client-side and server-side SSL connections.

Figure 4.44 SSL proxy entries in /config/bigip.conf with server-side certificate and key files configured

 proxy 10.1.1.1:443 unit 1 {
    target virtual 20.1.1.10:https
    clientssl enable
    clientssl key my.server.net.key
    clientssl cert my.server.net.crt
    serverssl enable
    serverssl key my.client.net.key
    serverssl cert my.client.net.crt
 }

Configuring server certificate authentication

You can verify server certificates, as well as specify the maximum number of certificates to be traversed in a server certificate chain.

Tip: In addition to configuring certificate authentication, you must also configure the trusted CAs. For more information, see Specifying a list of trusted Certificate Authorities (CAs), on page 4-102.

Verifying server certificates

To implement certificate authentication on the server side (that is, between the SSL proxy and the server), you can configure the proxy to either require the server to present a certificate or ignore the presentation of a certificate. Note, however, that you cannot require the server to present a certificate if anonymous cipher suites are negotiated.

If this option is set to require (the default setting), the proxy verifies any certificate presented by the server. If this verification fails, the SSL connection also fails, and the corresponding client connection is closed. If this option is set to ignore, verification fails only when a certificate is presented by the server and the certificate is expired or malformed.

To verify server certificates using the Configuration utility

  1. From the navigation pane, click Proxies.
  2. Click the Add button.
  3. In the Server Certificate field, select require or ignore from the box.
  4. Click Done.

To verify server certificates from the command line

This option is specified as serverssl server cert on the bigpipe proxy command line. The following command shows an example.

b proxy <ip>:<service> serverssl server cert require

Specifying traversal of certificate chains

In addition to the option to require or ignore a certificate presented by the server, SSL-to-Server has an option to specify the maximum number of certificates that can be traversed in a server certificate chain.

To configure certificate traversal using the Configuration utility

  1. From the navigation pane, click Proxies.
  2. Click the Add button.
  3. In the Authentication depth box, type a whole number. The default setting is 9.
  4. Click Done.

To configure certificate traversal from the command line

On the bigpipe proxy command line, this option is specified as serverssl authenticate depth, followed by a whole number representing the maximum number of certificates to be traversed. The following command shows an example.

b proxy <ip>:<service> serverssl authenticate depth 8

Configuring client-side authentication

This feature offers several options pertaining to client authentication. First, you can set the basic authentication option, which determines the extent to which an SSL proxy authenticates a client. Second, you can configure the SSL proxy to authenticate a client either once per SSL session or also upon each subsequent reuse of the session. Finally, you can specify the maximum number of certificates to be traversed in a client certificate chain. The following two sections explain these options.

Tip: In addition to configuring certificate authentication, you must also configure the trusted CAs. In so doing, it is recommended that you also configure the list of advertised CAs, to ensure that all clients know which CAs are trusted by the proxy. For more information, see Advertising a Trusted CA list, on page 4-104.

Basic authentication options

You can configure an SSL proxy to handle authentication of clients in three ways:

  • You can configure the proxy to request and verify a client certificate. In this case, the SSL proxy always grants access regardless of the status or absence of the certificate.
  • You can configure the proxy to require a client to present a valid and trusted certificate before granting access.
  • You can configure the proxy to ignore a certificate (or lack of one) and therefore never authenticate the client. This is the default setting.

Tip: The request option works well with the header insertion feature. Configuring the SSL proxy to insert client certificate information into an HTTP client request and to authenticate clients based on the request option allows the BIG-IP or a server to then perform actions such as redirecting the request to another server, or sending different content back to the client.

To configure client-side authentication using the Configuration utility

  1. From the navigation pane, click Proxies.
  2. Click the Add button.
  3. In the Client Certificate box, choose the Request, Require, or Ignore option.
  4. Click Done.

To configure client-side authentication from the command line

To configure client-side authentication from the command line, use the bigpipe proxy command and specify the desired option, as follows:

b proxy <ip>:<service> [clientssl] client cert <request | require | ignore>

Additional authentication options

If an SSL proxy is configured to verify client certificates, you can use two other options to configure client authentication in more detail: per-session authentication and authentication depth.

Per-session authentication

You can configure an SSL proxy to require authentication either once per SSL session, or once per session and upon each subsequent reuse of an SSL session. The default setting for this option is once, which causes the SSL proxy to request a client certificate and authenticate the client once per session.

To modify per-session authentication using the Configuration utility

You can modify the SSL proxy to require authentication not only once per session, but also upon each subsequent reuse of an SSL session.

  1. In the navigation pane, click Proxies.
  2. Click the Add button.
  3. Click on the Client Authenticate Once box. This changes the setting from once to always.
  4. Click Done.

To modify per-session authentication from the command line

To modify the SSL proxy to require authentication not only once per session, but also upon each subsequent reuse of an SSL session, specify the always argument with the bigpipe proxy command, as follows. This changes the setting from once to always.

bigpipe proxy <ip>:<service> [clientssl] authenticate <once | always>

Authentication depth

Using this option, you can configure the maximum number of certificates that can be traversed in the client certificate chain. The default value is nine. If a longer chain is provided, and the client has not been authenticated within this number of traversals, client certificate verification fails.

To configure authentication depth using the Configuration utility

  1. In the navigation pane, click Proxies.
  2. Click the Add button.
  3. In the Client Authenticate Depth box, type a whole number.
  4. Click Done.

To configure authentication depth from the command line

To configure authentication depth from the command line, use the authenticate depth argument with the bigpipe proxy command, as follows:

b proxy <ip>:<service> [clientssl] authenticate depth <num>

Inserting headers into HTTP requests

You can configure the SSL proxy to insert several kinds of headers into an HTTP client request. They are:

  • A custom HTTP header
  • Cipher specification
  • Client certificate fields
  • Client session IDs

    An example of when you might want to insert a header into an HTTP request is when the proxy is configured to request, rather than require, a certificate during client authentication. Because client authentication always succeeds in this case, regardless of the status of the certificate, you might want the proxy to insert information into the HTTP request about the client certificate and the results of the verification attempt. Based on this information, the BIG-IP or a server could then perform actions such as redirecting the request to another server, or sending different content back to the client.

    If any of these header types is inserted into a valid HTTP request, the SSL proxy places the header so that it immediately follows the first line of the request. If more than one header is inserted, the headers are inserted in the order listed above.

    The following sections describe these header types.

Warning: Sometimes, the SSL proxy might not be able to insert a specified header, such as when a client request uses a non-standard HTTP method, or when the request is pipelined. If you are making security decisions based on the value of these headers, you should exercise caution. Also, using the HTTP header insertion feature is not recommended when the SSL proxy targets a virtual server directly accessible to client connections, because it is then not known whether a connection has come through the SSL proxy or directly from the client.

A custom HTTP header

When adding an SSL proxy, you can configure the proxy to insert a string of your choice into an HTTP request. This feature is useful for custom applications, as well as for securing content. For example, when the Outlook Web Access application detects the presence of a particular custom HTTP header, the application generates embedded URLs using the protocol HTTPS instead of HTTP.

A properly-formatted custom HTTP header is in the form of Field: Value. Note that an improperly-formatted custom HTTP header could cause the content server to fail in its handling of proxied requests.
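As a concrete sketch (the header name FRONT-END-HTTPS and the value on below are examples only; use whatever header your application actually expects), a custom header is simply a field name, a colon, and a value:

```shell
# Build a custom header in the required "Field: Value" form.
# The field name and value here are hypothetical examples.
field="FRONT-END-HTTPS"
value="on"
custom_header="${field}: ${value}"
echo "$custom_header"
```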

To insert a custom header using the Configuration utility

  1. In the navigation pane, click Proxies.
  2. Click the Add button.
  3. In the Insert HTTP Header String box, type a custom HTTP header in the form of Field: Value.
  4. Click Done.

To insert a custom header from the command line

To insert a custom header into an HTTP request using the command line, specify the header insert argument with the bigpipe proxy command, as follows:

b proxy <ip>:<service> header insert \"quoted string\"

A cipher specification

When adding an SSL proxy, you can configure the proxy to insert information about the negotiated SSL cipher into an HTTP request. When you configure this option, the SSL proxy inserts the actual cipher name, the SSL version, and the number of significant bits into the HTTP request.

A properly-formatted cipher specification header is in the form SSLClientCipher: [cipher] version=[version] bits=[bits], where [cipher], [version], and [bits] represent the actual cipher name, version, and number of significant cipher bits, respectively.
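For example (the cipher name, version, and bit count below are hypothetical values, not output from a real connection), a cipher-specification header might look like this:

```shell
# Illustrative only: a cipher-specification header as the proxy
# might insert it for a hypothetical 128-bit RC4-SHA connection
cipher="RC4-SHA"
version="SSLv3"
bits=128
header="SSLClientCipher: ${cipher} version=${version} bits=${bits}"
echo "$header"
```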

The ability to insert a cipher specification into a client request is useful for two primary reasons:

  • Inserting cipher information into an HTTP request can help ensure that a client uses a specific cipher strength, thus enhancing the security of the SSL connection. Also, if the cipher strength of the client is unacceptable, you can direct the client to a "cipher upgrade" path, rather than discarding the session altogether.
  • You can create rules that perform load balancing based on the cipher strength specified in the inserted header. Thus, using the HTTP request string variable http_header, you could create a rule such as that shown in Figure 4.45.

    Figure 4.45 A rule based on cipher strength specified in an HTTP header

     if (exists http_header "SSLClientCipher") {
        if (http_header "SSLClientCipher" contains "bits=128") {
           use ( secure_pool )
        }
        else {
           redirect to "https://%h/upgradebrowser.html"
        }
     }
     else {
        redirect to "https://%h/servererror.html"
     }

To insert a cipher specification using the Configuration utility

  1. In the navigation pane, click Proxies.
  2. Click the Add button.
  3. Check the Insert Cipher check box.
  4. Click Done.

To insert a cipher specification from the command line

Specify the cipher insert argument with the bigpipe proxy command, as follows:

b proxy <ip>:<service> [clientssl] cipher insert <enable | disable>

Client certificate fields

When adding an SSL proxy, you can configure the proxy to insert into an HTTP request a header for each field of a client certificate. This feature is most useful when:

  • You have configured the SSL proxy to authenticate clients with the request option. For more information, see Configuring client-side authentication, on page 4-93.
  • You want to better control the load balancing of your network traffic. In this case, you can create a rule that performs load balancing according to the certificate information in the header. Figure 4.46 shows an example.

    Figure 4.46 A rule based on certificate status specified in an HTTP header

     if (exists http_header "SSLClientCertStatus") {
        if (http_header "SSLClientCertStatus" contains "OK") {
           use ( authenticated_pool )
        }
        else {
           redirect to "https://%h/authenticationfailed.html"
        }
     }
     else {
        redirect to "https://%h/servererror.html"
     }

    Table 4.18 shows the client certificate headers that the SSL proxy can insert into a client request. For each header, the required format and a description are shown.

    Table 4.18 Required formats of client certificate headers

      • Certificate status
        Format: SSLClientCertStatus: [status]
        The status of the client certificate. The value of [status] can be "NoClientCert", "OK", or "Error". If the status is "NoClientCert", only this header is inserted into the request. If the status is "Error", the error is followed by a numeric error code.

      • Certificate version
        Format: SSLClientCertVersion: [version]
        The version of the certificate.

      • Certificate serial number
        Format: SSLClientCertSerialNumber: [serial]
        The serial number of the certificate.

      • Signature algorithm of the certificate
        Format: SSLClientCertSignatureAlgorithm: [alg]
        The signature algorithm of the certificate.

      • Issuer of the certificate
        Format: SSLClientCertIssuer: [issuer]
        The issuer of the certificate.

      • Certificate validity dates
        Format: SSLClientCertNotValidBefore: [before]
                SSLClientCertNotValidAfter: [after]
        The validity dates for the certificate. The certificate is not valid before or after the dates represented by [before] and [after], respectively.

      • Certificate subject
        Format: SSLClientCertSubject: [subject]
        The subject of the certificate.

      • Public key of the subject
        Format: SSLClientCertSubjectPublicKey: [key]
        The type of the public key. The allowed types are "RSA ([size] bit)", "DSA", or "Unknown public key".

      • The certificate itself
        Format: SSLClientCert: [cert]
        The actual client certificate.

      • MD5 hash of the certificate
        Format: SSLClientCertHash: [hash]
        The MD5 hash of the client certificate.
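Concretely, the first lines of a request proxied with several of these headers enabled might look like the following sketch (all values are illustrative, not taken from a real certificate; the inserted headers immediately follow the request line):

```shell
# Hypothetical first lines of an HTTP request after the SSL proxy
# has inserted certificate-field headers (values are made up)
request='GET /account/ HTTP/1.0
SSLClientCertStatus: OK
SSLClientCertVersion: 3
SSLClientCertSerialNumber: 01A4
Host: www.sample.com'
echo "$request"
```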

To insert fields of a client certificate using the Configuration utility

  1. In the navigation pane, click Proxies.
  2. Click the Add button.
  3. In the Insert Certificate box, check the appropriate check boxes.
  4. Click Done.

To insert fields of a client certificate from the command line

To insert headers for the fields of a client certificate into an HTTP request using the command line, specify the client cert insert argument with the bigpipe proxy command, as follows:

b proxy <ip>:<service> [clientssl] client cert insert <([versionnum] [serial] [sigalg] [issuer] [validity] [subject] [subpubkey] [whole] [hash])+ | disable>

Client session IDs

When adding an SSL proxy, you can configure the proxy to insert a client SSL session ID header into an HTTP request.

The header that is inserted can be one of two types:

  • A header in which the session ID is the session ID initially negotiated with the client for the corresponding TCP connection. The proper format of this header is SSLClientSessionID:X, where X is the hexadecimal representation of that initially negotiated session ID.
  • A header in which the session ID is the current session ID. The proper format of this header is SSLClientCurrentSessionID:X, where X is the hexadecimal representation of the current SSL session ID.

    If you enable the insertion of session ID headers, but specify neither of these two types of session IDs, the SSL proxy inserts the session ID initially negotiated with the client.
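For illustration (the hexadecimal value below is made up, not a real negotiated session ID), the two header forms look like this:

```shell
# Hypothetical session-ID headers; the hex string stands in for
# the hexadecimal representation of an SSL session ID
sid="9f86d0811a2b3c4d"
initial_header="SSLClientSessionID:${sid}"
current_header="SSLClientCurrentSessionID:${sid}"
echo "$initial_header"
echo "$current_header"
```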

To insert a session ID header using the Configuration utility

  1. In the navigation pane, click Proxies.
  2. Click the Add button.
  3. In the Insert Client Session ID box, check either or both of the Initial and Current check boxes.
  4. Click Done.

To insert a session ID header from the command line

To insert a session ID header into an HTTP request using the command line, specify the sessionid insert argument with the bigpipe proxy command, as follows:

b proxy <ip>:<service> [clientssl] sessionid insert [initial] [current] [enable]

Note: One use of client session IDs is to enable SSL persistence. Note that SSL persistence should not be enabled on pools that load balance plain-text traffic, that is, traffic resulting from SSL proxies on which SSL termination is enabled.

Specifying SSL ciphers and protocol versions

For each SSL proxy, you can specify both the ciphers available for SSL connections, and the protocol versions that are not allowed.

When configuring ciphers and protocol versions, you must ensure that the ciphers and the protocol versions configured for the SSL proxy match those of the proxy's peer. That is, ciphers and protocol versions for the client-side SSL proxy must match those of the client, and ciphers and protocol versions for the server-side SSL proxy must match those of the server.

For example, a client might connect to and successfully establish an SSL connection to an SSL proxy that is configured to use both client-side and server-side SSL. After the client sends additional data (such as an HTTP request), the SSL proxy attempts to establish an SSL connection to a server. However, the SSL proxy might be configured to enable only 3DES ciphers for server-side SSL, and the servers might be configured to accept only RC4 ciphers. In this case, the SSL handshake between the SSL proxy and the server will fail because there are no common ciphers enabled. This results in the client connection being closed. If the client is using a browser, the user will likely receive an error message indicating that the web page failed to load.

The following sections describe how to configure cipher lists and protocol versions for the SSL proxy.

Configuring cipher lists

You can configure the list of SSL ciphers that are available for both client-side and server-side SSL connections. Whether using the Configuration utility or the bigpipe proxy command, you can specify a string to indicate the available list of SSL ciphers.

Note: To see the complete syntax for the cipher list, see the following OpenSSL web site: http://www.openssl.org/docs/apps/ciphers.html.
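As a rough illustration of that syntax (the cipher names below are common OpenSSL names, used purely as an example), a cipher list is a colon-separated string; this sketch just splits one apart to show its structure:

```shell
# A hypothetical cipher string in OpenSSL list syntax; tr splits
# it on colons to show the individual cipher names
cipher_list="DES-CBC3-SHA:RC4-SHA:RC4-MD5"
echo "$cipher_list" | tr ':' '\n'
```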

To configure a cipher list using the Configuration utility

  1. In the navigation pane, click Proxies.
  2. Click the Add button.
  3. In either or both of the Client Cipher List String and Server Cipher List String boxes, type a properly-formatted string.
  4. Click Done.

To configure a cipher list from the command line

To specify a list of ciphers from the command line, specify the client-side cipher list or the server-side cipher list, as follows:

b proxy <ip>:<service> [clientssl] ciphers \"quoted string\"
b proxy <ip>:<service> serverssl ciphers \"quoted string\"

Tip: You can use the openssl ciphers command to test the validity of a cipher string.

Configuring invalid protocol versions

For both client-side and server-side SSL connections, you can specify SSL protocol versions that should not be allowed. You can declare up to two of the following three protocol versions to be invalid: SSLv2, SSLv3, and TLSv1. If no protocol versions are specified, all SSL protocol versions are allowed. If all three protocol versions are disallowed, no SSL sessions can be successfully negotiated.

To specify invalid protocol versions using the Configuration utility

  1. In the navigation pane, click Proxies.
  2. Click the Add button.
  3. In the Client-side Connections Do Not Use These SSL Versions box or the Server-side Connections Do Not Use These SSL Versions box, check the appropriate check boxes.
  4. Click Done.

To specify invalid SSL protocol versions from the command line

Use the following syntax:

b proxy <ip>:<service> [clientssl] invalid [SSLv2] [SSLv3] [TLSv1]

b proxy <ip>:<service> serverssl invalid [SSLv2] [SSLv3] [TLSv1]

Specifying a list of trusted Certificate Authorities (CAs)

For both client-side and server-side SSL connections, you can specify trusted certificate authorities (CAs). The proxy can then use this CA specification to do the following:

  • Build certificate chains
  • Verify client certificates
  • Advertise to clients the CAs that the server trusts

    The following sections describe each of these uses of the trusted CAs list.

Building a certificate chain

Sometimes, a certificate that the SSL proxy uses to authenticate itself to a peer is signed by an intermediate CA that is not trusted by that peer. In this case, the proxy might need to build a certificate chain. The proxy allows you to build a certificate chain by specifying the name of a specific certificate chain file, either through the Configuration utility or from the command line. Note that the certificate files that make up the chain file must be in PEM format.

When attempting to access the specified chain file, the SSL proxy searches for the file in the following manner:

  1. The proxy looks to see whether the specified file has a .chain extension.
  2. If the file specification does not include a .chain extension, the proxy appends that extension to the file and then searches for the file.
  3. If the file is not found, the proxy instead appends a .crt extension to the file and searches again.
  4. If the file is still not found, the proxy uses the same file name as that of the configured certificate. For example, the proxy might take the file name www.dot.com.crt, replace the .crt file name extension with the .chain extension, and search on the file name www.dot.com.chain.
  5. If unable to build the certificate chain using the preceding procedure, the proxy attempts to build the chain through certificate verification, described in the following section.
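The search order above can be sketched in shell as follows; the file names are hypothetical, and the proxy's real lookup is internal to the BIG-IP and may differ in detail:

```shell
# A minimal sketch of the chain-file search order described above
resolve_chain() {
  spec="$1"   # the configured chain file specification
  cert="$2"   # the configured certificate file name
  case "$spec" in
    *.chain) candidates="$spec" ;;                  # step 1: already has .chain
    *)       candidates="$spec.chain $spec.crt" ;;  # steps 2 and 3
  esac
  candidates="$candidates ${cert%.crt}.chain"       # step 4: certificate name
  for f in $candidates; do
    if [ -f "$f" ]; then echo "$f"; return 0; fi
  done
  return 1  # step 5: fall back to building the chain via verification
}
```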

To build a certificate chain using the Configuration utility

  1. In the navigation pane, click Proxies.
  2. Click the Add button.
  3. In the box Client Chain File or Server Chain File, either select the name of a Trusted CAs file from the box, or type the name of a Trusted CA file.
  4. Click Done.

To build a certificate chain from the command line

To build a certificate chain from the command line, type the bigpipe proxy command with the appropriate arguments, as follows:

b proxy <ip>:<service> [clientssl] chain <clientside chain file name>

b proxy <ip>:<service> serverssl chain <serverside chain file name>

Verifying certificates

For both client-side and server-side SSL processing, you can configure the SSL proxy to verify certificates. Using either the Configuration utility or the bigpipe proxy command, you can specify both a Trusted CA file name and a Trusted CA path name, which the proxy then uses to verify client certificates.

Certificate verification is useful for the following reasons:

  • To authenticate the proxy's peer
  • To build a certificate chain to be sent to a peer, when the standard method for building a certificate chain fails

    The Trusted CA file.
    The Trusted CA file that you specify to configure certificate verification contains one or more certificates, in PEM format. If you do not specify a Trusted CA file, or the specified Trusted CA file is not accessible to the proxy, the proxy uses the default file name /config/bigconfig/ssl.crt/intermediate-ca.crt.

    The Trusted CA path.
    When searching a Trusted CA path, the proxy only examines those certificates that include a symbolic link to a certificate file. To ensure that each certificate has a link to its corresponding certificate file, you can configure the proxy to generate these symbolic links. If you do not specify a Trusted CA path, or the Trusted CA path is not accessible to the proxy, the proxy uses the default path name /config/bigconfig/ssl.crt/.

    Note that each certificate file should contain only one certificate. This is because only the first certificate in the file is used.
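As an illustration of the symbolic-link layout in a Trusted CA path (the hash-style link name a1b2c3d4.0 and the file names are made up for this sketch; the BIG-IP can generate the real links for you when you enable symbolic-link generation):

```shell
# Sketch of a Trusted CA path directory: each certificate file is
# reachable through a symbolic link (here with a hypothetical
# hash-style name), and each file holds a single certificate
dir="$(mktemp -d)"
touch "$dir/intermediate-ca.crt"            # one certificate per file
ln -s intermediate-ca.crt "$dir/a1b2c3d4.0" # hypothetical link name
ls -l "$dir"
```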

To specify the Trusted CA file and Trusted CA path using the Configuration utility

  1. In the navigation pane, click Proxies.
  2. Click the Add button.
  3. In the boxes Client Trusted CA File and Client Trusted CA Path, or Server Trusted CA File and Server Trusted CA Path, either select the name of a Trusted CAs file and path from the box, or type the name of a Trusted CA file or path.
  4. If you want to ensure that each certificate has a link to its corresponding file, check the Generate Symbolic Links for Client Trusted CAs Path check box. You should do this whenever you specify any of the path attributes.
  5. Click Done.

To specify the Trusted CA file and Trusted CA path from the command line

To specify the Trusted CA file and Trusted CA path from the command line, type the bigpipe proxy command, using the appropriate arguments, as follows:

b proxy <ip>:<service> [clientssl] ca file <clientside CA file name>

b proxy <ip>:<service> [clientssl] ca path <clientside CA path name>

b proxy <ip>:<service> serverssl ca file <serverside CA file name>

b proxy <ip>:<service> serverssl ca path <serverside CA path name>

Advertising a Trusted CA list

If you intend to configure the SSL proxy to require or request client certificates for authentication, you usually want the proxy to send to clients a list of CAs that the server is likely to trust. Although modern browsers automatically limit the user's selection of trusted CAs based on the proxy's configured list of trusted CAs, older browser versions may not have this capability.

The list of advertised trusted CAs can be different from the actual Trusted CA file configured as part of certificate verification.

To configure the proxy to send this list, you can specify a PEM-formatted certificate file that contains one or more CAs that a server trusts for client authentication. If no certificate file is specified, no list of trusted CAs is sent to a client.

To advertise a list of trusted CAs using the Configuration utility

  1. In the navigation pane, click Proxies.
  2. Click the Add button.
  3. In the Client Certificate CA File box, select a file name from the box, or type the certificate CA file name.
  4. Click Done.

To advertise a list of trusted CAs from the command line

To configure the proxy to send a list of trusted CAs to a client from the command line, type the bigpipe proxy command, using the following arguments:

b proxy <ip>:<service> [clientssl] client cert ca <clientside client cert CA file name>

Rewriting HTTP redirection

When a client request is redirected from the HTTPS to the HTTP protocol, an SSL proxy can rewrite that redirection to HTTPS. (Specifically, this applies to HTTP responses 301, 302, 303, 305, and 307). This ability for the SSL proxy to rewrite HTTP redirections provides additional security by ensuring that client requests remain on a secure channel.

Another benefit of the ability to rewrite HTTP redirection pertains to IIS and Netscape web-server environments. Prior to this feature, an IIS or Netscape web server would redirect a request incorrectly if the original request included a malformed directory name (one without a trailing slash [/]). The ability of an SSL proxy to rewrite such a redirection solves this problem.

Note: If your web server is an IIS server, you can configure that server, instead of the SSL proxy, to handle any rewriting of HTTP redirections. To solve the problem described above, you can install a special BIG-IP file, redirectfilter.dll, on your IIS server. For more information, see Rewriting HTTP redirection, on page 4-41.

Note that the rewriting of any redirection takes place only in the HTTP Location header of the redirection response, and not in any content of the redirection.

This rewrite feature can rewrite the protocol name and the port number. Optionally, you can specify how the proxy should handle URIs during a rewrite.

Rewriting the protocol name

This feature allows the SSL proxy to rewrite the HTTP protocol name to HTTPS. For example, a client might send a request to https://www.sample.com/bar and be initially redirected to http://www.sample.com/bar/, which is a non-secure channel. If you want the client request to remain on a secure channel, you can configure the SSL proxy to rewrite the redirected URI to go to https://www.sample.com/bar/ instead. (Note the addition of the trailing slash.)

Rewriting the port number

In addition to rewriting the protocol name from HTTP to HTTPS, the SSL proxy can also rewrite the port number of the redirected request. This happens when the web server, the SSL proxy, or both are listening on a non-standard port, for example, when the client request is initially redirected to http://www.sample.com:8080/bar/. In this case, the SSL proxy rewrites not only the protocol name but also the port number. If, however, the SSL proxy is listening on the standard HTTPS port 443, then the SSL proxy removes the 8080 port number, without replacing it with 443.

Selecting URIs to rewrite

When configuring the SSL proxy to rewrite HTTP redirections, you can specify whether the proxy should rewrite only those URIs matching the URI originally requested by the client (minus the trailing slash), or all URIs. In the latter case, the SSL proxy always rewrites redirected-to URIs, and rewrites those URIs as if they matched the originally-requested URIs.

Table 4.19 shows examples of how redirections of client requests are transformed when the SSL proxy is listening on port 443 and the rewrite feature is enabled.

Table 4.19 Examples of rewriting HTTP redirections with the SSL proxy listening on port 443

Original redirection                      Rewritten redirection
http://www.myweb.com/myapp/               https://www.myweb.com/myapp/
http://www.myweb.com:8080/myapp/          https://www.myweb.com/myapp/

Table 4.20 shows examples of how redirections of client requests are transformed when the SSL proxy is listening on port 4443 and the rewrite feature is enabled.

Table 4.20 Examples of rewriting HTTP redirections with the SSL proxy listening on port 4443

Original redirection                      Rewritten redirection
http://www.myweb.com/myapp/               https://www.myweb.com:4443/myapp/
http://www.myweb.com:8080/myapp/          https://www.myweb.com:4443/myapp/

To configure the rewrite feature using the Configuration utility

  1. In the navigation pane, click Proxies.
  2. Click the Add button.
  3. In the Rewrite Redirects box, to enable the feature, select either Matching or All from the list. To disable the feature, do not select an option from the box. By default, the feature is disabled.
  4. Click Done.

To configure the rewrite feature from the command line

To configure this feature from the command line, type the bigpipe proxy command and specify the redirects rewrite argument, as follows:

b proxy <ip>:<service> redirects rewrite <<matching | all> [enable] | disable>
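
For example, to rewrite all redirections for a hypothetical proxy at 10.1.1.1:443 (an illustrative address, not one of the examples above), you might type:

b proxy 10.1.1.1:443 redirects rewrite all enable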

Configuring SSL session cache

For both client-side and server-side SSL connections, you can configure timeout and size values for the SSL session cache.

Because each proxy maintains a separate client-side SSL session cache, the client-side values can be configured on a per-proxy basis. For server-side SSL connections, however, the proxy maintains a single session cache. Thus, server-side session cache values must be configured globally.

Setting SSL Session Cache Timeout

Using either the Configuration utility or the bigpipe command, you can specify the number of seconds that negotiated SSL session IDs remain valid. The default timeout value for the SSL session cache is 300 seconds. Acceptable values are integers greater than or equal to 5.

Clients attempting to resume an SSL session with an expired session ID are forced to negotiate a new session.

Client-side timeout values. The client-side timeout values are configured on a per-proxy basis. Client-side timeout values can be set to zero, which represents no timeout.

Warning: If the timeout value for the client-side SSL session cache is set to zero, the SSL session IDs negotiated with that proxy's clients remain in the session cache until either the proxy is restarted, or the cache fills and the purging of entries begins. Setting a value of zero can introduce a significant security risk if valuable resources are available to a client that is reusing those session IDs. It is therefore common practice to set the SSL session cache timeout to no more than 24 hours, and often to a much shorter period.

Server-side timeout values. A single, server-side timeout value is configured globally. This timeout value cannot be set to zero. For optimal performance, the timeout value should be set to the minimum SSL session cache timeout value used by the servers to which the proxy makes server-side SSL connections. Under certain conditions, the proxy attempts to efficiently negotiate a new server-side SSL session prior to its expiration.

To set the timeout value of the client-side SSL session cache using the Configuration utility

  1. In the navigation pane, click Proxies.
  2. Click the Add button.
  3. In the Client Session Cache Timeout box, type zero (for no timeout), type an integer greater than or equal to 5, or use the default value.
  4. Click Done.

To set the timeout value of the server-side SSL session cache using the Configuration utility

  1. In the navigation pane, click System.
  2. Click on the Advanced Properties tab.
  3. In the Server SSL Session Cache Timeout box, type an integer greater than or equal to 5, or use the default value. (This value cannot be zero.)
  4. Click Done.

To set the timeout value of the client-side SSL session cache from the command line

To set the timeout value of the client-side SSL session cache, type the bigpipe proxy command with the appropriate arguments, as follows:

b proxy <ip>:<service> [clientssl] cache timeout <num>
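
For example, to set a 600-second client-side cache timeout on a hypothetical proxy at 10.1.1.1:443 (illustrative values), you might type:

b proxy 10.1.1.1:443 clientssl cache timeout 600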

To set the timeout value of the server-side SSL session cache from the command line

To set the timeout value of the server-side SSL session cache, type the bigpipe global command with the appropriate arguments, as follows:

b global sslproxy serverssl cache timeout <num>
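
For example, to set the global server-side cache timeout to 300 seconds (the default value stated above):

b global sslproxy serverssl cache timeout 300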

Setting SSL session cache size

Using either the Configuration utility or the bigpipe command, you can specify the maximum size of the SSL session cache. The default value for the size of the SSL session cache is 20,000 entries.

The client-side values for the maximum size of the session cache are configured on a per-proxy basis. A single, server-side value for the maximum size of the session cache is configured globally.

To set the maximum size of the client-side SSL session cache using the Configuration utility

  1. In the navigation pane, click Proxies.
  2. Click the Add button.
  3. In the Client Session Cache Size box, type an integer or use the default value.
  4. Click Done.

To set the maximum size of the client-side SSL session cache size from the command line

To set the maximum size of the client-side SSL session cache from the command line, type the bigpipe proxy command with the appropriate arguments, as follows:

b proxy <ip>:<service> [clientssl] cache size <num>
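
For example, to set the client-side cache on a hypothetical proxy at 10.1.1.1:443 to the default maximum of 20,000 entries:

b proxy 10.1.1.1:443 clientssl cache size 20000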

To set the maximum size of the server-side SSL session cache using the Configuration utility

  1. In the navigation pane, click System.
  2. Click the Advanced Properties tab.
  3. In the Server SSL Session Cache Size box, type an integer or use the default value.
  4. Click Done.

To set the maximum size of the server-side SSL session cache from the command line

To set the maximum size of the server-side SSL session cache from the command line, type the bigpipe global command with the appropriate arguments, as follows:

b global sslproxy serverssl cache size <num>
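
For example, to set the global server-side cache to the default maximum of 20,000 entries:

b global sslproxy serverssl cache size 20000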

Configuring SSL proxy failover

If you have a redundant BIG-IP configuration, you can configure the SSL proxy to initiate an automatic failover in the event of a fatal cryptographic hardware module failure. A fatal failure is the condition where the BIG-IP, after having had an initial success communicating with the cryptographic accelerator module, subsequently receives a hardware error.

This option is configured globally, and is disabled by default.

Note: In redundant configurations, connections handled by the SSL proxy are not mirrored, and therefore cannot be resumed by the peer unit upon failover.

To configure SSL proxy failover using the Configuration utility

  1. In the navigation pane, click System.
  2. Click the Advanced Properties tab.
  3. In the Failover on SSL Accelerator Failure box, select Enable or Disable.
  4. Click Done.

To configure SSL proxy failover from the command line

To enable or disable the SSL proxy for failover from the command line, type the bigpipe global command with the appropriate arguments, as follows:

b global sslproxy failover <enable | disable>
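
For example, to enable automatic failover on a fatal cryptographic hardware failure:

b global sslproxy failover enable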

Configuring SSL shutdowns

With respect to the shutdown of SSL connections, you can configure two global options on the BIG-IP:

  • Forcing clean SSL shutdowns
  • Allowing SSL sessions to resume after unclean shutdown

    The following sections describe these options.

Forcing clean SSL shutdowns

By default, the SSL proxy performs unclean shutdowns of all SSL connections, which means that underlying TCP connections are closed without exchanging the required SSL shutdown alerts. If you want to force the SSL proxy to perform a clean shutdown of all SSL connections, you can disable the default setting.

This feature is especially useful with respect to the Internet Explorer browser. Different versions of the browser, and even different builds within the same version of the browser, handle shutdown alerts differently. Some versions or builds require shutdown alerts from the server, while others do not, and the SSL proxy cannot always detect this requirement or lack of it. In the case where the browser expects a shutdown alert but the SSL proxy has not exchanged one (the default setting), the browser displays an error message.

To configure SSL shutdowns using the Configuration utility

  1. In the navigation pane, click System.
  2. Click the Advanced Properties tab.
  3. In the Force Unclean Shutdown Of All SSL Connections box, check or clear the check box.
  4. Click Done.

To configure SSL shutdowns from the command line

To configure SSL shutdowns from the command line, type the bigpipe global command with the appropriate arguments, as follows:

b global sslproxy unclean shutdown <enable | disable>
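
For example, to force clean shutdowns of all SSL connections (that is, to disable the default unclean-shutdown behavior described above):

b global sslproxy unclean shutdown disable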

Resuming SSL sessions

In addition to forcing clean shutdowns, you can also configure the SSL proxy to prevent an SSL session from being resumed after an unclean shutdown. The default option is disable, which causes the SSL proxy to allow uncleanly shut down SSL sessions to be resumed. Conversely, when the enable option is set, the SSL proxy refuses to resume SSL sessions after an unclean shutdown.

To configure the SSL proxy to resume SSL sessions using the Configuration utility

You can allow the SSL proxy to resume, or prevent the SSL proxy from resuming, SSL sessions after an unclean shutdown.

  1. In the navigation pane, click System.
  2. Click the Advanced Properties tab.
  3. In the Do Not Resume Uncleanly Shutdown SSL Connections box, check or clear the check box.
  4. Click Done.

To configure the SSL proxy to resume SSL sessions from the command line

To allow the SSL proxy to, or prevent the SSL proxy from, resuming SSL sessions after an unclean shutdown from the command line, type the bigpipe global command with the appropriate arguments, as follows:

b global sslproxy strict resume <enable | disable>
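
For example, to prevent the SSL proxy from resuming SSL sessions after an unclean shutdown:

b global sslproxy strict resume enable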

Disabling ARP requests

By default, the BIG-IP responds to ARP requests for the proxy address and sends a gratuitous ARP request to update router tables. If you want to disable ARP requests for the proxy address, you must specify arp disable.
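
The full command syntax is not shown in the original text; based on the pattern of the other bigpipe proxy arguments in this chapter, it likely takes the following form:

b proxy <ip>:<service> arp disable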

Adding a last hop pool to an SSL proxy

In cases where you have more than one router sending connections to a BIG-IP, connections are automatically sent back through the same router from which they were received, as long as the auto_lasthop global variable is enabled (the default). If auto_lasthop is disabled for any reason (for example, you may not want it for a virtual server), or if you want to exclude one or more routers from auto_lasthop, you can direct your replies to the last hop router using a last hop pool. The last hop pool takes precedence over auto_lasthop.

To configure a last hop pool, you must first create a pool containing the router inside addresses. After you create the pool, use the following syntax to configure a last hop pool for a proxy:

b proxy <ip>:<service> lasthop pool <pool_name>

For example, if you want to assign the last hop pool named ssllasthop_pool to the SSL proxy 11.12.1.200:443, type the following command:

b proxy 11.12.1.200:443 lasthop pool ssllasthop_pool

Deleting an SSL proxy

If you want to delete the SSL proxy 209.100.19.22:443, type a command such as the following:

b proxy 209.100.19.22:443 delete

The content converter proxy

The content converter proxy performs conversion of URLs to ARLs (Akamai Resource Locators). ARLs point to copies of URL targets that are stored on geographically nearby servers on the Akamai Freeflow Network™ for greater speed of access. The conversion from URL to ARL is performed whenever a client accesses a web page on a customer site containing a URL with an ARL counterpart, hence the name on-the-fly content conversion. On-the-fly content conversion has the advantage that the HTML source does not need to be updated each time a new ARL is added.

Note: The content converter feature is usable only by customers of the Akamai Freeflow Network. In addition, the features required to configure this option are available only on the BIG-IP HA and Enterprise software versions.

Creating a content converter gateway

Configuring a content converter consists of two tasks. First, configure the Akamai on-the-fly conversion software for your network. Second, create the content converter gateway using the proxy command. (If the software is not configured first, the attempt to create a proxy will fail.)

To configure the on-the-fly conversion software

  1. On the BIG-IP, bring up the Akamai configuration file /config/akamai.conf in an editor like vi or pico.
  2. Under the heading [CpCode] you will find the text default=XXXXX. Replace the Xs with the CP code provided by your Akamai Integration Consultant. (If contacting your consultant, specify that you are using the BIG-IP on-the-fly akamaizer based on Akamai's 1.0 source code.) Example:

    default=773

  3. Under the heading [Serial Number] you will find the text staticSerialNumber=XXXXX. Replace the Xs with the static serial number provided by your Akamai Integration Consultant. Example:

    staticSerialNumber=1025

    Note: This value needs to be set only if the algorithm under [Serial Number] is set to static, as it is in the default file. If you choose to set the algorithm to deterministicHash or deterministicHashBounded, the static serial number is not applicable. If you are unsure what method to select, contact your Akamai Integration Consultant.

  4. Under the heading [URLMetaData] you will find the text httpGetDomains=XXXXX. Replace the Xs with the domain name of the content to be converted. Example:

    httpGetDomains=www.f5.com

  5. Save and exit the file.
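
Taken together, the edits in the steps above would leave entries like the following in /config/akamai.conf. (The values shown are the examples from the steps; your CP code, serial number, and domain will differ.)

[CpCode]
default=773

[Serial Number]
staticSerialNumber=1025

[URLMetaData]
httpGetDomains=www.f5.com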

To create a content converter proxy using the Configuration utility

  1. In the navigation pane, click Proxies.
  2. Click the Add button.
  3. In the Proxy Type box, check the Akamaize check box.
  4. Configure the attributes you want to use with the proxy.
  5. Click Done.

To configure the content converter proxy at the command line

Use the following command-line syntax to create a content converter proxy:

b proxy <ip>:<service> [unit <unit_id>] target server|virtual <ip>:<service> akamaize enable

For example, from the command line you can create a proxy that looks like this:

b proxy 10.1.1.1:80 unit 1 target virtual 20.1.1.1:80 akamaize enable

When the content converter proxy is written in the /config/bigip.conf file, it looks like the example in Figure 4.47.

Figure 4.47 An example content converter proxy configuration

proxy 10.1.1.1:http unit 1 {
   target virtual 20.1.1.1:http
   akamaize enable
}

Additional proxy tasks

In addition to the proxy configuration options described in the Proxies section of this guide, you can perform the following tasks:

  • Disable or delete an SSL or content converter proxy
  • Disable or delete any VLAN that is mapped to a proxy
  • Display proxy configuration information

    The following three sections describe these tasks.

To disable or delete a proxy from the command line

You can disable or delete a proxy with the following syntax:

b proxy <ip>:<service> disable
b proxy <ip>:<service> delete

For example, if you want to disable the SSL proxy 209.100.19.22:443, you would type the following command:

b proxy 209.100.19.22:443 disable

If you want to delete the SSL proxy 209.100.19.22:443, you would type the following command:

b proxy 209.100.19.22:443 delete

To disable or delete a VLAN for a proxy from the command line

A proxy is mapped by default to all VLANs on the BIG-IP. To disable or delete any VLANs to which you do not want the proxy to be mapped, use the following syntax:

b proxy <ip>:<service> vlans <vlan_name> disable
b proxy <ip>:<service> vlans <vlan_name> delete

To display configuration information for a proxy from the command line

Use the following syntax to view configuration information for the specified proxy:

b proxy <ip>:<service> show

For example, if you want to view configuration information for the SSL proxy 209.100.19.22:443, type the following command:

b proxy 209.100.19.22:443 show

Nodes

Nodes are the network devices to which the BIG-IP passes traffic. A network device becomes a node when it is added as a member to a load balancing pool. You can display information about nodes and set properties for nodes.

The attributes you can configure for a node are listed in Table 4.21.

Table 4.21 The attributes you can configure for a node

Enable/Disable nodes: You can enable or disable nodes independent of a load balancing pool.

Set node up/down: You can set a node to up or down.

Connection limit: You can place a connection limit on a node.

Associate a node with a monitor: You can associate a health monitor with a node, creating an instance of that monitor.

Add a node as a member of a pool: You can add a node to a pool as a member. This allows you to use the load balancing and persistence methods defined in the pool to control connections handled by the node.

To enable and disable nodes and node addresses

A node must be enabled in order to accept traffic. When a node is disabled, it allows existing connections to time out or end normally, and accepts new connections only if they belong to an existing persistence session. (In this way a disabled node differs from a node that is set down: the down node allows existing connections to time out, but accepts no new connections.)

To enable a node or node address, use the node command with the enable option:

b node 192.168.21.1 enable

To disable a node or node address, use the node command with the disable option:

b node 192.168.21.1 disable

To mark nodes and node ports up or down

A node must be marked up in order to accept traffic. When a node is marked down it allows existing connections to time out but accepts no new connections.

To mark a node down, specify the node command with a node address and the down option. (Note that marking a node down prevents the node from accepting new connections. Existing connections are allowed to complete.)

b node 192.168.21.1 down

To mark a node up, use the node command with the up option:

b node 192.168.21.1 up

To mark a particular service down, specify the node command with a node address and port, and the down option. (Note that marking a port down prevents the port from accepting new connections. Existing connections are allowed to complete.)

b node 192.168.21.1:80 down

To mark a particular port up, use the node command with up option:

b node 192.168.21.1:80 up

To set connection limits for nodes

Use the following command to set the maximum number of concurrent connections allowed on a node:

b node <node_ip>[:<service>][...<node_ip>[:<service>]] limit <max conn>

Note that to remove a connection limit, set the <max conn> variable to 0 (zero). For example:

b node 192.168.21.1:80 limit 0

The following example shows how to set the maximum number of concurrent connections to 100 for a list of nodes:

b node 192.168.21.1 192.168.21.2 192.168.21.3 limit 100


To associate a health monitor with a node

Use the following command to associate a health monitor with a node:

b node <node> monitor use <monitor>

A monitor can be placed on multiple nodes and a node can have multiple monitors placed on it. To place a monitor on multiple nodes:

b node <node_list> monitor use <monitor>

To place multiple monitors on a node:

b node <node> monitor use <monitor1> and <monitor2>...
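
For example, to associate a hypothetical monitor named my_http (an illustrative name, not a monitor defined in this chapter) with the node 192.168.21.1, you might type:

b node 192.168.21.1 monitor use my_http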

For more information on using the node command with health monitors, refer to Health monitors, on page 4-136.

To display status of all nodes

When you issue the node show command, the BIG-IP displays the node status (up, down, or unchecked) and a node summary of connection statistics, which is further broken down to show statistics by port. To display the status of all nodes from the command line, type the following command:

b node show

The report shows the following information:

  • Current number of connections
  • Total number of connections made to the node since last boot
  • Maximum number of concurrent connections since the last boot
  • Concurrent connection limit on the node
  • Total number of inbound and outbound packets and bits

    Figure 4.48 shows the output of this command.

    Figure 4.48 Node status and statistics

     bigpipe node 192.168.200.50:20    
    NODE 192.168.200.50 UP
    | (cur, max, limit, tot) = (0, 0, 0, 0)
    | (pckts,bits) in = (0, 0), out = (0, 0)
    +- PORT 20 UP
    (cur, max, limit, tot) = (0, 0, 0, 0)
    (pckts,bits) in = (0, 0), out = (0, 0)

To display the status of individual nodes and node addresses

Use the following command to display status and statistical information for one or more node addresses:

b node 192.168.21.1 show

The command shows the status of each node address, the number of current connections, total connections, and connections allowed, and the number of cumulative packets and bits sent and received.

Use the following command to display status and statistical information for one or more specific nodes:

b node 192.168.21.1:80 show

To reset statistics for a node

Use the following command to reset the statistics for an individual node address:

b node [<node_ip>:<service>] stats reset

To add a node as a member of a pool

You can add a node as a member of a load balancing pool. For detailed information about how to do this, see Member specification, on page 4-5.

Services

Services are the standard Internet applications supported by the BIG-IP, such as HTTP, HTTPS, FTP, and POP3. Each service is known by its name and also by its well-known or reserved port number, such as 80 or 443. (Specifically, a service is any valid service name in the /etc/services file or any valid port number between 0 and 65535.) The bigpipe service command allows you to enable and disable network traffic on services, and also to set connection limits and timeouts. You can use the service name or the port number for the <service> parameter. Note that the settings you define with this command control the service for all virtual servers that use it. By default, access to all services is disabled.

Tip: Virtual servers using the same service actually share a port on the BIG-IP. Because this command is global, you need to open access to a port only once; you do not need to open access to a port for each instance of a virtual server that uses it.

Table 4.22 The attributes you can configure for a service

Allow access to services: As a security measure, all services are locked down on the BIG-IP. In order for the BIG-IP to load balance traffic, you must enable access to the service on which the BIG-IP will receive traffic.

Connection limits: You can define a connection limit for a service so that a flood of connections does not overload the BIG-IP.

Set idle connection timeouts: You can set the idle connection timeout to close idle connections.

To allow access to services using the Configuration utility

Any time you create a virtual server and define a port or service with the Configuration utility, the port or service is automatically enabled.

To allow access to services from the command line

Using the bigpipe service command, you can allow access to one or more services at a time.

b service <service> [...<service>] [tcp|udp] enable

For example, to enable the HTTP (port 80), Telnet (port 23), and HTTPS (port 443) services, you can type the following bigpipe service command:

b service 80 23 443 tcp enable

To set connection limits on services

Use the following syntax to set the maximum number of concurrent connections allowed on a service. Note that you can configure this setting for one or more services.

b service <service> [...<service>] limit <max conn>
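
For example, to limit the HTTP service (port 80) to 5000 concurrent connections (an illustrative limit):

b service 80 limit 5000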

To turn off a connection limit for one or more services, use the same command, setting the <max conn> parameter to 0 (zero) like this:

b service <service> [...<service>] limit 0

To enable or disable TCP for services

You can enable or disable TCP for specific services. The default setting for all services is enabled. Use the following syntax to disable TCP for one or more services:

b service <service> [...<service>] tcp disable

To re-enable TCP, use this syntax:

b service <service> [...<service>] tcp enable

To enable or disable UDP for services

You can enable or disable UDP for specific services. The default setting for all services is disabled. Use the following syntax to enable UDP for one or more services:

b service <service> [...<service>] udp enable

To disable UDP, use this syntax:

b service <service> [...<service>] udp disable

To set the idle connection timeout for TCP traffic

To set the TCP timeout on one or more services, where the <seconds> parameter is the number of seconds before an idle connection is dropped, use the following syntax:

b service <service> [<service>...] timeout tcp <seconds>

For example, the following command sets the TCP timeout to 300 seconds for port 53:

b service 53 timeout tcp 300

To turn off TCP timeout for a service, use the above command, setting the <seconds> parameter to zero:

b service 53 timeout tcp 0

To set the idle connection timeout for UDP traffic

To set the UDP timeout on one or more services, where the <seconds> parameter is the number of seconds before an idle connection is dropped, use the following syntax:

b service <service> [<service>...] timeout udp <seconds>

For example, the following command sets the UDP timeout to 300 seconds for port 53:

b service 53 timeout udp 300

To turn off UDP timeout for a service, use the above command, setting the <seconds> parameter to zero:

b service 53 timeout udp 0

To display service settings

Use the following command to display the settings for all services:

b service show

Use the following syntax to display the settings for a specific service or list of services:

b service <service> [...<service>] show

For example, the command b service http show displays the output shown in Figure 4.49.

Figure 4.49 Sample output of the bigpipe service show command

 SERVICE 80 http tcp enabled timeout 1005 udp disabled timeout 60
(cur, max, limit, tot, reaped) = (0, 0, 0, 0, 0)
(pckts,bits) in = (0, 0), out = (0, 0)

Address translation: SNATs, NATs, and IP forwarding

The BIG-IP uses address translation and forwarding in various ways to make nodes accessible that would otherwise be hidden on its internal VLAN.

  • A virtual server translates the destination address of an inbound packet from its own address (the virtual server's) to the address of the node to which it load balances the packet. It then translates the origin address of the reply back to its own address so the originating host will not try to address the member node directly. This translation is basic to the way the virtual server works in most configurations and it is enabled by default.
  • You can configure a SNAT (Secure Network Address Translation) or NAT (Network Address Translation) to give a node that is a member of a load balancing pool a routable address as an origin address for purposes of generating its own outbound traffic. A SNAT can be configured manually, or automatically using the SNAT auto-map feature.
  • You can configure a forwarding virtual server to expose selected nodes to the external network.
  • You can configure IP forwarding globally to expose all internal nodes to the external network.

    For more information on enabling address translation for virtual servers, refer to Virtual servers, on page 4-69. The following sections describe how to configure SNATs, NATs, and IP forwarding.

SNATs

A secure network address translation (SNAT) provides a routable alias IP address that a node can use as its source IP address when making connections to clients on the external network. Unlike a network translation address (NAT), a SNAT does not accept inbound traffic, and this is where its security lies. When you define a SNAT, you can use it in any of the following ways:

  • Assign a single SNAT address to a single node
  • Assign a single SNAT address to multiple nodes
  • Enable a SNAT for a VLAN

    Note that a SNAT address does not necessarily have to be unique; for example, it can match the IP address of a virtual server.

    The attributes you can configure for a SNAT are shown in Table 4.23.

    Table 4.23 The attributes you can configure for a SNAT

    Global SNAT properties: Before you configure a SNAT, you can configure global properties for all SNATs on the BIG-IP. Configuring global properties for a SNAT is optional.

    Manual SNAT mapping: You can define a specific translation address to be mapped to an individual host.

    SNAT automapping: You can configure the BIG-IP to automatically map a translation address.

Setting SNAT global properties

The SNAT feature supports three global properties that apply to all SNAT addresses:

  • Connection limits
    The connection limit applies to each node that uses a SNAT.
  • TCP idle connection timeout
    This timer defines the number of seconds that TCP connections initiated using a SNAT address are allowed to remain idle before being automatically disconnected.
  • UDP idle connection timeout
    This timer defines the number of seconds that UDP connections initiated using a SNAT address are allowed to remain idle before being automatically disconnected. This value should not be set to 0.

To configure SNAT global properties using the Configuration utility

  1. In the navigation pane, click SNATs.
    The SNATs screen opens.
  2. In the Connection Limit box, type the maximum number of connections you want to allow for each node using a SNAT.
  3. To turn connection limits off, set the limit to 0.
  4. In the TCP Idle Timeout box, type the number of seconds that TCP connections initiated by a node using a SNAT are allowed to remain idle.
  5. In the UDP Idle Timeout box, type the number of seconds that UDP connections initiated by a node using a SNAT are allowed to remain idle. This value should not be set to 0.
  6. Click the Apply button.

To configure SNAT global properties from the command line

Configuring global properties for a SNAT requires that you enter three bigpipe commands. The following command sets the maximum number of connections you want to allow for each node using a SNAT.

b snat limit <value>

The following commands set the TCP and UDP idle connection timeouts:

b snat timeout tcp <seconds>

b snat timeout udp <seconds>

When adding a default SNAT for an active-active configuration, see Adding automapped SNATs for active-active configurations, on page 4-128.
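Taken together, a hedged example of setting all three global properties from the command line (the values shown are illustrative, not recommendations):

```shell
# Allow up to 500 concurrent connections per node using a SNAT
b snat limit 500

# Drop SNAT-initiated TCP connections idle for more than 300 seconds
b snat timeout tcp 300

# Drop SNAT-initiated UDP connections idle for more than 60 seconds
# (this value should not be set to 0)
b snat timeout udp 60
```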

Configuring a SNAT manually

Once you have configured the SNAT global properties, you can manually configure SNAT address mappings. When you map a SNAT manually, you specify a particular translation IP address that you want the BIG-IP to assign from any of the following:

  • One or more specified node addresses
  • One or more VLANs
  • A combination of specified node addresses and VLANs
  • All node addresses (known as a default SNAT)

    Note that a SNAT address does not necessarily have to be unique; for example, it can match the IP address of a virtual server. A SNAT address cannot match an address already in use by a NAT or another SNAT address.

    The following sections describe how to add a default SNAT and how to add a SNAT manually for individual node addresses, VLANs, or a combination of both.

Adding a default SNAT manually

If you do not want to configure a SNAT for each individual node, you can manually create a default SNAT. When you add a default SNAT, you are directing the BIG-IP to map every node on the internal network to a default translation address.

Note: The following procedures do not apply to active-active configurations. For information on how to add a default SNAT for an active-active configuration, see Adding automapped SNATs for active-active configurations, on page 4-128.

To add a default SNAT manually using the Configuration utility

  1. In the navigation pane, click NATs.
    The NATs screen displays.
  2. Click the SNATs tab.
  3. Click the Add Default button.
    The Add Default SNAT screen opens.
  4. In the Translation Address field, select the IP button, and type the IP address that you want BIG-IP to assign as a translation address.
  5. Click Done.

To add a default SNAT manually from the command line

Use the following syntax to manually define the default SNAT. If you use the netmask parameter and it is different from the external interface default netmask, the command sets the netmask and derives the broadcast address.

b snat map default to <snat_ip> \

[vlan <vlan_name> disable|enable] \

[netmask <ip>]
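For example, the following sketch maps all internal nodes to a single translation address (the address and netmask are illustrative):

```shell
# Map every node on the internal network to 192.168.100.20,
# overriding the external interface's default netmask
b snat map default to 192.168.100.20 netmask 255.255.255.0
```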

Adding a SNAT for individual node addresses and VLANs

If you do not want to add a default SNAT, you can add a SNAT for any individual node address or VLAN. The following procedures describe how to manually add a SNAT.

To manually add a SNAT using the Configuration utility

The Configuration utility allows you to define one SNAT for one or more original IP addresses, where the original IP address can be either a specific node address or a VLAN name.

  1. In the navigation pane, click NATs.
    The NATs screen displays.
  2. Click the SNATs tab.
  3. Click the Add button.
    The Add SNAT screen opens.
  4. In the Translation Address field, select the IP button, and type the IP address that you want BIG-IP to assign as a translation address.
  5. Type each node's IP address into the Original Address: box and move the address to the Current List: box, using the right arrows (>>). Also, verify that the option choose appears in the VLAN box.
  6. If you want to map the translation address from a VLAN, select the VLAN name from the VLAN box and move the selection to the Current List: box, using the right arrows (>>).
  7. Click Done.

To add a manual SNAT from the command line

The bigpipe snat command defines one SNAT for one or more original IP addresses, where the original IP address can be either a specific node address or a VLAN name. To manually add a SNAT using the bigpipe snat command, use the following syntax.

b snat map <orig_ip>... to <snat_ip>

For example, to define a SNAT for two specific nodes:

b snat map 192.168.75.50 192.168.75.51 to 192.168.100.10

To define a SNAT for two internal VLANs:

b snat map internal1 internal2 to 192.168.102.11

To define a SNAT for both a node address and a VLAN:

b snat map 192.168.75.50 internal2 to 192.168.100.12

To create individual SNAT addresses

Use the following command-line syntax to create a SNAT mapping:

b snat map <orig_ip> [...<orig_ip>] to \
<snat_ip> [vlan <vlan_name> disable | enable] [unit <unit ID>] [netmask <ip>]

If the netmask is different from the external interface default netmask, the command sets the netmask and derives the broadcast address.
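As an illustration of the full syntax, the following sketch combines several of the optional parameters (all addresses and VLAN names are hypothetical):

```shell
# Map two nodes to the translation address 192.168.100.30, disable the
# SNAT on VLAN external2, and set a non-default netmask
b snat map 192.168.75.60 192.168.75.61 to 192.168.100.30 \
  vlan external2 disable netmask 255.255.255.0
```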

Configuring SNAT automapping

BIG-IP includes a feature called SNAT automapping. When you map a SNAT automatically, rather than manually, you enable the BIG-IP to choose the translation IP address. You also enable the BIG-IP to map that translation address from any of the following:

  • One or more specified node addresses
  • One or more VLANs
  • A combination of specific node addresses and VLANs
  • All node addresses (known as a default SNAT)

    SNAT automapping eliminates the need for you to specifically define an IP address as the translation address.

    The SNAT automapping feature is useful in the following cases:

  • Where there is a need to ensure that outbound traffic returning through ISPs or NAT-less firewalls returns through the same ISP or firewall.
  • Where a traditional single SNAT address would quickly exhaust the number of ephemeral ports available. As long as there is more than one eligible self IP address, SNAT automapping can increase the number of simultaneous connections possible by using the same ephemeral port on multiple addresses.
  • When the equivalent of a default SNAT, that is, a SNAT that continues to work in the event of a failure in one BIG-IP, is required for BIG-IP units in active-active mode. (The conventional default SNAT does not work in active-active mode.)

Adding an automapped default SNAT

The BIG-IP allows you to take advantage of the SNAT automapping feature when adding a default SNAT. When you add a default SNAT, you are enabling the BIG-IP to map every node on the internal network to a default translation address. With the automapping feature, you do not need to define a specific translation address to which all nodes on the network will be mapped.

To add the automapped default SNAT using the Configuration utility

  1. In the navigation pane, click NATs.
    The NATs screen displays.
  2. Click the SNATs tab.
  3. Click the Add Default button.
    The Add Default SNAT screen opens.
  4. Click the Automap button.
  5. Click Done.

To add the automapped default SNAT from the command line

To add a default SNAT using the automapping feature, type the bigpipe snat command as follows:

b snat map default to auto

Note: A default SNAT cannot be added for an active-active configuration. For more information, see Adding automapped SNATs for active-active configurations, on page 4-128.

Adding automapped SNATs for standard (active-standby) configurations

When enabling SNAT automapping for VLANs, the BIG-IP handles the SNATs in the following ways:

  • If you create a SNAT on an internal VLAN, a SNAT is performed on any connection made from that VLAN.
  • If you enable snat automap on a single self IP address, the translation address is that self IP address.
  • If you enable snat automap on more than one self IP address (implying more than one IP network), the following rules apply:

    • If the connection is handled by a non-forwarding virtual server, the translation address is the self IP address that matches the IP network of the node selected by load balancing.
    • If the connection is handled by a forwarding virtual server or no virtual server, the translation address is the self IP address that matches the IP network of the next hop to the destination.
    • If there are no self addresses that match the IP network of the node or the next hop, any self IP address on the VLAN is eligible.

      To add a SNAT using the automapping feature, you must complete two procedures:

  • Enable the snat automap attribute on any self IP addresses.
  • Add the SNAT, specifying the Automap feature.

    The following sections explain these procedures.

To enable the snat automap attribute on a self IP address from the command line

When you enable automapping to add a SNAT, the translation address that the BIG-IP maps to an individual node or a VLAN is the self IP address. Thus, prior to enabling automapping for the node or VLAN, you must enable the snat automap attribute on the self IP address. This is done from the command line, using the following syntax:

b self <self IP address> snat automap enable

For example, if you have the two self IP addresses 192.168.217.14 and 192.168.217.15, the following commands enable the snat automap attribute on those self IP addresses:

b self 192.168.217.14 snat automap enable

b self 192.168.217.15 snat automap enable

Later, when you add a SNAT using automapping, the BIG-IP maps either of those self IP addresses to the original node (or VLAN) that you specify.

As another example, the following command enables the snat automap attribute on the self IP address 10.0.0.1, for the VLAN named external:

b self 10.0.0.1 vlan external snat automap enable

For more information, see To add an automapped SNAT from the command line, on page 4-128.

To add an automapped SNAT using the Configuration utility

The Configuration utility allows you to define one SNAT for one or more original IP addresses, where the original IP address can be either a specific node address or a VLAN name.

  1. In the navigation pane, click NATs.
    The NATs screen displays.
  2. Click the SNATs tab.
  3. Click the Add button.
    The Add SNAT screen opens.
  4. In the Translation Address dialog area, click the Automap button.
  5. If you want to map the translation address from one or more specific nodes, enter each node's IP address into the Original Address: box and move the address to the Current List: box, using the right arrows (>>). Also, verify that the option choose appears in the VLAN box.
  6. If you want to map the translation address to a VLAN, select the VLAN name from the VLAN box and move the selection to the Current List: field, using the right arrows (>>).
  7. Click Done.

To add an automapped SNAT from the command line

The bigpipe snat command defines one SNAT for one or more original IP addresses, where the original IP address can be either a specific node address, or a VLAN name.

For example, to define an automapped SNAT for two individual node addresses:

b snat map 10.1.1.1 10.1.1.2 to auto

In the preceding example, the translation address to which the nodes 10.1.1.1 and 10.1.1.2 will be mapped is the self IP address, assuming that you enabled the snat automap attribute on that self IP address prior to using the bigpipe snat command. For more information, see To enable the snat automap attribute on a self IP address from the command line, on page 4-127.

To define an automapped SNAT for a VLAN named internal:

b snat map internal to auto

To define an automapped SNAT for both a node address and a VLAN:

b snat map 192.168.75.50 internal2 to auto

Note: When adding automapped SNATs, you must also enable the snat automap attribute on the self IP address that the BIG-IP will use as the translation address. For more information, see To enable the snat automap attribute on a self IP address from the command line, on page 4-127.

Adding automapped SNATs for active-active configurations

In the case where you want to add a default SNAT for an active-active configuration, you cannot create the standard default SNAT described earlier in this section. Instead, you must create the equivalent of a default SNAT.

To create the equivalent of a default SNAT, it is necessary to assign each unit its own floating self IP address on the external VLAN. This is done for the same reason that separate aliases are assigned to the internal network as part of routine active-active setup. (See Configuring an active-active system, on page 6-11.) Because you already have a floating self IP address for the external interface that is configured as belonging to unit one on unit one and unit two on unit two, use the following procedure to create the two unit-specific IP aliases.

To create two unit-specific SNATs

  1. On unit one, ensure that two floating self IP addresses are configured for unit one. For example:

    b self 11.11.11.3 vlan internal unit 1 floating enable

    b self 172.16.16.3 vlan external unit 1 floating enable

  2. Also on unit one, ensure that two floating self IP addresses are configured for unit two. For example:

    b self 11.11.11.4 vlan internal unit 2 floating enable

    b self 172.16.16.4 vlan external unit 2 floating enable

  3. Ensure that unit two has all of these self IP addresses by using the config sync command to synchronize the changes to unit two:

    b config sync all

  4. Set up SNAT automapping as you would for an active/standby system, but enable both external aliases:

    b self 172.16.16.3 vlan external snat automap enable

    b self 172.16.16.4 vlan external snat automap enable

    b snat map internal to auto

ISPs and NAT-less firewalls

BIG-IP handles ISPs and NAT-less firewalls in the following manner:

  • If multiple external interfaces are available, the inside addresses of the firewalls in the load balancing pool may each be connected to different interfaces and assigned to different VLANs.
  • A SNAT is then enabled on each VLAN.
  • A SNAT must also be enabled on the internal VLAN.

    For example, if the internal VLAN is named internal and the external VLANs are named external1 and external2, you would type the following commands:

    b snat map internal to auto

    b snat map external1 to auto

    b snat map external2 to auto

  • If multiple external interfaces are not available, the ISP routers or firewalls are assigned to different IP networks. This will already be the case for ISPs.
  • For firewalls, the separate IP address ranges must be established on the inside and outside interfaces of each firewall. The separate networks are then assigned separate self addresses, for example, 10.0.0.1 and 11.0.0.1.

    Thus, if the internal and external VLANs are named internal and external, you would type the following commands:

    b self 10.0.0.1 vlan external snat automap enable

    b self 11.0.0.1 vlan external snat automap enable

    b snat map internal to auto

Disabling SNATs and NATs for a pool

When configuring a pool, you can specifically disable SNAT or NAT translations on any connections that use that pool. By default, this setting is enabled. For information on how to disable SNAT and NAT connections for a pool, see Disabling SNAT and NAT connections, on page 4-45.

Disabling ARP requests

By default, the BIG-IP responds to ARP requests for the SNAT address and sends a gratuitous ARP request for router table update. If you want to disable the SNAT address for ARP requests, you must specify arp disable.
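A minimal sketch, assuming arp disable follows the per-SNAT attribute pattern used elsewhere in this chapter (the address is illustrative; verify the exact syntax against your release):

```shell
# Prevent the BIG-IP from answering ARP requests for this SNAT address
b snat 192.168.100.10 arp disable
```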

Additional SNAT configuration options

The following procedures allow you to further configure SNATs.

To delete SNAT addresses

The following syntax deletes a specific SNAT:

b snat <snat_ip> | default delete

To show SNAT mappings

The following bigpipe command shows mappings:

b snat [<snat_ip> ...] show

b snat default show

The value of the <snat_ip> variable can be either the translated or the original IP address of the SNAT, or a SNAT-enabled VLAN name.

The following command shows the current SNAT connections:

b snat [<snat_ip> ...] dump [ verbose ]

b snat default dump [ verbose ]

The optional verbose keyword provides more detailed output.

The following command prints the global SNAT settings:

b snat globals show

To enable mirroring for redundant systems

The following example sets SNAT mirroring for all SNAT connections originating at 192.168.225.100:

b snat 192.168.225.100 mirror enable

To clear statistics

You can reset statistics by node address, SNAT address, or VLAN name. Use the following syntax to clear all statistics for one or more nodes:

b snat <node_ip> ... stats reset

Use the following syntax to clear all statistics for one or more SNAT addresses:

b snat <snat_ip> ... stats reset

Use the following command to reset the statistics to zero for the default:

b snat default stats reset

NATs

A network translation address (NAT) provides a routable alias IP address that a node can use as its source IP address when making connections to, or receiving connections from, clients on the external network. (This distinguishes it from a SNAT, which can make outbound connections but refuses inbound connections.) You can configure a unique NAT for each node address included in a virtual server mapping.

Note: NATs do not support port translation, and are not appropriate for protocols that embed IP addresses in the packet, such as FTP, NT Domain, or CORBA IIOP. You cannot define any NATs if you configure a default SNAT.

Table 4.24 shows the attributes you can configure for a NAT.

Table 4.24 The attributes you can configure for a NAT

  • Original address
    The original address is the node IP address of a host that you want to be able to connect to through the NAT.
  • Translated address
    The translated address is an IP address that is routable on the external network of the BIG-IP. This IP address is the NAT address.
  • Disabled VLAN list
    VLANs to which the NAT is not to be mapped can be explicitly disabled, as when there is more than one internal VLAN.
  • Unit ID
    You can specify a unit ID for a NAT if the BIG-IP is configured to run in active-active mode.

The IP addresses that identify nodes on the BIG-IP internal network need not be routable on the external network. This protects nodes from illegal connection attempts, but it also prevents nodes (and other hosts on the internal network) from receiving direct administrative connections, or from initiating connections to clients, such as mail servers or databases, on the BIG-IP external interface.

Using network address translation resolves this problem. Network address translations (NATs) assign to a particular node a routable IP address that the node can use as its source IP address when connecting to servers on the BIG-IP external interface. You can use the NAT IP address to connect directly to the node through the BIG-IP, rather than having the BIG-IP send you to a random node according to the load balancing mode.

Note: In addition to these options, you can set up forwarding virtual servers that allow you to selectively forward traffic to specific addresses. The BIG-IP maintains statistics for forwarding virtual servers.

Defining a network address translation (NAT)

When you define standard network address translations (NATs), you need to create a separate NAT for each node that requires a NAT. You also need to use unique IP addresses for NAT addresses; a NAT IP address cannot match an IP address used by any virtual or physical servers in your network. You can configure a NAT with the Configuration utility or from the command line.

To configure a NAT using the Configuration utility

  1. In the navigation pane, click NATs.
    The NATs screen opens.
  2. Click the Add button.
    The Add NAT screen opens.
  3. In the Add NAT screen, fill in the fields to configure the NAT. For additional information about configuring a NAT, click the Help button.

To configure a NAT from the command line

A NAT definition maps the IP address of a node <orig_addr> to a routable address on the external interface <trans_addr>. Use the following syntax to define a NAT:

b nat <orig_addr> to <trans_addr> [vlans <vlan_list> disable | enable] [unit <unit ID>]

The vlans <vlan_list> parameter is used to disable the specified VLANs for translation. By default, all VLANs are enabled.

Use the unit <unit ID> parameter to specify the BIG-IP to which this NAT applies in an active-active redundant system.

The following example shows a NAT definition:

b nat 10.10.10.10 to 10.12.10.10
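A further sketch using the optional parameters (addresses, the VLAN name, and the unit ID are illustrative):

```shell
# Map node 10.10.10.20 to NAT address 10.12.10.20, disable the NAT on
# VLAN internal2, and assign it to unit 1 of an active-active pair
b nat 10.10.10.20 to 10.12.10.20 vlans internal2 disable unit 1
```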

To delete NATs

Use the following syntax to delete one or more NATs from the system:

b nat <orig_addr> [...<orig_addr>] delete

To display status of NATs

Use the following command to display the status of all NATs included in the configuration:

b nat show

Use the following syntax to display the status of one or more selected NATs (see Figure 4.50).

b nat <orig_addr> [...<orig_addr>] show

Figure 4.50 Output when you display the status of a NAT

 NAT { 10.10.10.3 to 9.9.9.9 }    
(pckts,bits) in = (0, 0), out = (0, 0)
NAT { 10.10.10.4 to 12.12.12.12
netmask 255.255.255.0 broadcast 12.12.12.255 }
(pckts,bits) in = (0, 0), out = (0, 0)

To reset statistics for a NAT

Use the following command to reset the statistics for an individual NAT:

b nat [<orig_addr>] stats reset

Use the following command to reset the statistics for all NATs:

b nat stats reset

Disabling SNATs and NATs for a pool

When configuring a pool, you can specifically disable any SNAT or NAT connections that use that pool. By default, this setting is enabled. For information on how to disable SNAT and NAT connections for a pool, see Disabling SNAT and NAT connections, on page 4-45.

Disabling ARP requests

By default, the BIG-IP responds to ARP requests for the NAT address and sends a gratuitous ARP request for router table update. If you want to disable the NAT address for ARP requests, you must specify arp disable.
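A minimal sketch, assuming arp disable is specified when the NAT is defined, as with the other NAT attributes (addresses are illustrative; verify the exact syntax against your release):

```shell
# Define the NAT with ARP responses disabled for the NAT address
b nat 10.10.10.10 to 10.12.10.10 arp disable
```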

Additional restrictions

The nat command has the following additional restrictions:

  • The IP address defined in the <orig_addr> parameter must be routable to a specific server behind the BIG-IP.
  • You must delete a NAT before you can redefine it.
  • The interface for a NAT can only be configured when the NAT is first defined.

IP forwarding

IP forwarding is an alternate way of allowing nodes to initiate or receive direct connections from the BIG-IP external network. IP forwarding directly exposes all of the node IP addresses to the external network, making them routable on that network. If your network uses the NT Domain or CORBA IIOP protocols, IP forwarding is an option for direct access to nodes.

Tip: Use of SNATs and NATs, as well as forwarding pools and forwarding virtual servers, is preferable to global IP forwarding. For more information on forwarding pools and forwarding virtual servers, see Forwarding pools, on page 4-47 and Forwarding virtual servers, on page 4-76.

IP forwarding is a global setting that exposes the IP address of all internal nodes to the BIG-IP external network, and clients can use it as a standard routable address. When you turn IP forwarding on, the BIG-IP acts as a router when it receives connection requests for node addresses. You can use the IP filter feature to implement a layer of security that can help protect your nodes.

Table 4.25 shows options associated with IP forwarding.

Table 4.25 The attributes you can configure for IP forwarding

  • Enable IP forwarding globally
    You can enable IP forwarding globally for the BIG-IP, either with the Configuration utility or by turning on the sysctl variable net.inet.ip.forwarding. To protect your nodes with this feature, we recommend that you use IP filters, which add a layer of security.
  • Address routing issues
    If you enable IP forwarding, you need to route packets to the node addresses through the BIG-IP.
  • Configure the forwarding attribute for a pool
    Instead of enabling IP forwarding globally or creating a forwarding virtual server, you can create a pool with no members that forwards traffic instead of load balancing it. For more information, see Forwarding pools, on page 4-47.
  • Enable IP forwarding for a virtual server
    Instead of enabling IP forwarding globally, you can create a special virtual server with IP forwarding enabled. For information on creating a forwarding virtual server, see Forwarding virtual servers, on page 4-76.

Enabling IP forwarding globally

IP forwarding is a global property of the BIG-IP system. To set up IP forwarding globally, you need to complete two tasks:

  • Turn IP forwarding on
    The BIG-IP uses a system control variable to control IP forwarding, and its default setting is off.
  • Verify the routing configuration
    You probably have to change the routing table for the router on the BIG-IP external network. The router needs to direct packets for nodes to the BIG-IP, which in turn directs the packets to the nodes themselves.

To set global IP forwarding using the Configuration utility

  1. In the navigation pane, click System.
    The Network Map screen opens.
  2. Click the Advanced Properties tab.
    The Advanced Properties screen opens.
  3. Check the Allow IP Forwarding box.
  4. Click Apply.

To set global IP forwarding from the command line

Use the bigpipe global ip_forwarding command to set the variable. The default setting for the variable is disabled. You should change the setting to enabled:

b global ip_forwarding enabled

Addressing routing issues for IP forwarding

Once you turn on IP forwarding, you probably need to change the routing table on the default router. Packets for the node addresses need to be routed through the BIG-IP. For details about changing the routing table, refer to your router's documentation.

Configuring the forwarding attribute for a pool

You can configure IP forwarding so that it is done by a pool, rather than globally by the BIG-IP or by an individual virtual server. For more information, see Forwarding pools, on page 4-47.

Enabling IP forwarding for a virtual server

You can configure IP forwarding so that it is done by a virtual server, rather than globally by the BIG-IP or by a specific pool. For more information, see Forwarding virtual servers, on page 4-76.

Health monitors

Health monitors verify connections and services on nodes that are members of load balancing pools. The monitor checks the node at a set interval. If the node does not respond within a specified timeout period, the node is marked down and traffic is no longer directed to it.

By default, an icmp (Internet Control Message Protocol) monitor is associated with every node that is a member of a load balancing pool. This monitor is of the simplest type, checking only the node address and checking only for a ping response. To change the interval and timeout values of this default check, or to check specific services on a node, you need to configure a custom monitor or monitors to add to the default monitor. The BIG-IP provides a variety of service-specific monitors in template form. Some of these monitors are usable as is (assuming their default values are acceptable) and may be put in service simply by associating them with the nodes to be monitored. In most cases, however, the template is used purely as a template for configuring custom monitors. Configuring custom monitors and placing them in service is a three-step process:

  • Selecting the template
  • Configuring the monitor from the template
  • Associating the monitor with the node or nodes

For example, for the default icmp monitor, we selected the icmp monitor template, as shown in Figure 4.51.



Figure 4.51 The icmp monitor template

 monitor type icmp {
interval 5
timeout 16
dest *
}

The icmp monitor template has three attributes, interval, timeout, and dest, each with a default value. (All monitor templates have these three basic attributes. Other monitor templates have additional attributes as required by the service type.) These attributes are inherited by the custom monitor when it is configured and can be left at their default values or assigned new values as required.

For the default monitor, template icmp is used as is, that is, as monitor icmp with its default attribute values. To change any of these default values, you need to create a custom monitor based upon icmp, for example, my_icmp. Only the values that are actually to be changed would need to be specified in the definition of the custom monitor. Therefore, if you wanted to change the timeout values only, you define the custom monitor as follows:

b monitor my_icmp '{ use icmp timeout 20 }'

This creates a new monitor in /config/bigip.conf, as shown in Figure 4.52. You can display this monitor using the command b monitor my_icmp show.

Figure 4.52 Custom icmp monitor

 monitor my_icmp {
#type icmp
use "icmp"
interval 5
timeout 20
}

Once the custom monitor exists, you associate it with a node or nodes using the Configuration utility or the bigpipe node command as follows.

b node 11.11.11.1 11.11.11.2 11.11.11.3 monitor use my_icmp

Note: The nodes are identified by IP address only. icmp can ping addresses only, not specific ports on addresses. This creates three instances of monitor my_icmp, one for each address. You can display the instances using the command b node monitor my_icmp show.

Figure 4.53 Output for the command b node monitor show

 +- NODE  ADDRESS 11.11.11.1   UP
| |
| +- icmp
| 11.11.11.1 up enabled
|
+- NODE ADDRESS 11.11.11.2 UP
| |
| +- icmp
| 11.11.11.2 up enabled
|
+- NODE ADDRESS 11.11.11.3 UP
|
+- icmp
11.11.11.3 up enabled

Note that each instance takes as its destination the same node it is associated with. This is because the dest value in my_icmp was left at the default *, which tells the instance to use the associated node as its destination. Assigning a specific address to dest, such as 11.11.11.1, would cause the monitor to verify all three addresses by checking that one address, making 11.11.11.2 and 11.11.11.3 dependent on 11.11.11.1.

Selecting the monitor template

Selecting a template is straightforward. Like icmp, each of the templates has a type based on the type of service it checks, for example, http, https, ftp, pop3, and takes that type as its name. (Exceptions are port-specific templates, like https_443, and the external template, which calls a user-supplied program.) To select a template, simply select the one that corresponds in name and/or type to the service you want to check. If more than one service is to be checked, for example http and https, more than one monitor can be placed on the node. (This creates a rule, namely that the node will not be considered up unless both monitors run successful checks.) You may not want to check all services available on a node specifically. If you want to verify only that the destination IP address is alive, or that the path to it through a transparent node is alive, use one of the simple templates, icmp or tcp_echo. If you want to verify TCP only, use the monitor template tcp.

All monitor templates are contained in the read-only file /etc/base_monitors.conf. The following sections describe each of the monitor templates, its function, and the information required to configure a monitor from it. The templates are divided into three groups based on the types of monitors they support: simple monitors, ECV (Extended Content Verification) monitors, and EAV (Extended Application Verification) monitors. Also described are the port-specific monitor templates, which are derived from the other types.

Working with templates for simple monitors

Simple monitors are those that check node addresses only and verify simple connections only. Templates for these monitors are icmp and tcp_echo.

Note: The templates icmp and tcp_echo are both usable as is, that is, they may be associated with nodes. It is important to understand, however, that using a template as is means that you are using the default attribute values. To change any of these values, you have to configure a custom monitor based on the template.

Using icmp

The icmp template uses Internet Control Message Protocol to make a simple node check. The check is successful if a response to an ICMP_ECHO datagram is received. icmp has no attributes other than the standard interval, timeout, and dest.

Figure 4.54 The icmp monitor template

 monitor icmp {
#type icmp
interval 5
timeout 16
dest *
}
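The interval and timeout values work together: the monitor issues a check every interval seconds, and the node is marked down once no check has succeeded within the timeout window. The sketch below illustrates that relationship; the function and its names are illustrative only, not BIG-IP internals.

```python
def node_status(last_success: float, now: float, timeout: float = 16.0) -> str:
    """Report 'up' while the most recent successful check falls inside
    the timeout window, 'down' once the window has elapsed.
    With the defaults of interval 5 and timeout 16, a node can miss
    roughly three consecutive checks before being marked down."""
    return "up" if (now - last_success) <= timeout else "down"
```

For example, a node whose last successful ping was 10 seconds ago is still up; at 17 seconds it is marked down.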

Using tcp_echo

The tcp_echo template uses Transmission Control Protocol. The check is successful if a response to a TCP ECHO message is received. tcp_echo also supports transparent mode. In this mode, the node with which the monitor is associated is pinged through to the destination node. (For more information about transparent mode, refer to Using transparent and reverse modes, on page 4-149.)

To use tcp_echo, you must ensure that TCP ECHO is enabled on the nodes being monitored.

Figure 4.55 The tcp_echo monitor template

 monitor tcp_echo  {
#type tcp_echo
interval 5
timeout 16
dest *
//transparent
}

Working with templates for ECV monitors

ECV monitors attempt to retrieve explicit content from nodes using send and recv statements. These include http, https, and tcp.

Note: The templates http, https, and tcp are all usable as is, and you may associate them with nodes. It is important to understand, however, that using a template as is means that you are using the default attribute values. To change any of these values, you have to configure a custom monitor based on the template.

Using tcp

The tcp template is for Transmission Control Protocol. A tcp monitor attempts to receive specific content. The check is successful when the content matches the recv expression. A tcp monitor takes a send string and a recv expression. If the send string is left blank, the service is considered up if a connection can be made. A blank recv string matches any response. Both transparent and reverse modes are options. (For more information about transparent and reverse modes, refer to Using transparent and reverse modes, on page 4-149.)

Figure 4.56 The tcp monitor template

 monitor tcp  {
#type tcp
interval 5
timeout 16
dest *:*
send ""
recv ""
//reverse
//transparent
}
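The check a tcp monitor performs can be sketched in Python: open a connection, optionally send the send string, and search the response for the recv expression. This is an illustrative approximation of the ECV behavior described above, not BIG-IP's implementation; the function name and buffer size are arbitrary.

```python
import re
import socket

def tcp_ecv_check(host, port, send="", recv="", timeout=16):
    """Sketch of an ECV-style check. A blank send string means a
    successful connection alone counts as up; a blank recv
    expression matches any response."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            if not send:
                return True          # connection alone is success
            sock.sendall(send.encode())
            data = sock.recv(4096).decode(errors="replace")
            return re.search(recv, data) is not None if recv else True
    except OSError:
        return False
```

With both strings left blank, the check degenerates to a simple connection test, matching the template's defaults.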

Using http

The http template is for HyperText Transfer Protocol. Like a tcp monitor, an http monitor attempts to receive specific content from a web page; unlike a tcp monitor, it also sends a user name and password. The check is successful when the content matches the recv expression. An http monitor uses a send string, a recv expression, a username, a password, and optional get, url, transparent, and reverse statements. (If there is no password security, use blank strings [""] for username and password.) The optional get statement replaces the send statement, automatically filling in the string "GET". Thus, the following two statements are equivalent:

send "GET /"
get "/"

The optional url statement takes the HTTP URL as a value and automatically fills in the dest value with the address the URL resolves to. Both transparent and reverse modes are also options. (For more information about transparent and reverse modes, refer to Using transparent and reverse modes, on page 4-149. For more information about the get and url statements, refer to Using send, receive, url, and get statements, on page 4-149.)
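The get shorthand can be read as a simple string rewrite. This hypothetical helper shows the equivalence:

```python
def get_to_send(path: str) -> str:
    """get "<path>" behaves like send "GET <path>": the GET verb is
    filled in automatically. (Helper is illustrative only, not a
    BIG-IP API.)"""
    return "GET " + path
```

So get "/" produces the same request as send "GET /".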

Figure 4.57 The http monitor template

 monitor http {
#type http
interval 5
timeout 16
dest *:*
send "GET /"
recv ""
username ""
password ""
//get
//url
//reverse
//transparent
}

Using https

The https template is for Hypertext Transfer Protocol Secure. An https monitor attempts to receive specific content from a web page protected by SSL security. The check is successful when the content matches the recv expression. An https monitor uses a send string, a recv expression, and a username and password. (If there is no password security, use blank strings [""] for username and password.) The optional get statement replaces the send statement, automatically filling in the string "GET". Thus, the following two statements are equivalent:

send "GET /"
get "/"

The optional url statement takes the HTTPS URL as a value and automatically fills in the dest value with the address the URL resolves to.

Figure 4.58 The https monitor template

 monitor https {
#type https
interval 5
timeout 16
dest *:*
send "GET /"
recv ""
//get
//url
username ""
password ""
}

Working with templates for EAV monitors

EAV monitors verify applications on the node by running those applications remotely, using an external service checker program located in the directory /usr/local/lib/pingers. These include ftp, pop3, smtp, sql, nntp, imap, ldap, and radius. Also included is the template external, which has a run attribute to specify a user-added external monitor.

Using ftp

The ftp template is for File Transfer Protocol. The monitor attempts to download a specified file to the /var/tmp directory. The check is successful if the file is retrieved. The ftp monitor takes a get statement, username, and password. The get statement takes the full path to the file as a value. The optional url statement may be used in place of get. The url takes the FTP URL as a value and automatically fills in the dest value with the address the URL resolves to. (For more information about the get and url statements, refer to Using send, receive, url, and get statements, on page 4-149.)

Figure 4.59 The ftp monitor template

 monitor ftp {
#type ftp
interval 5
timeout 16
dest *:*
username ""
password ""
get ""
//url
}

Using pop3

The pop3 template is for Post Office Protocol. The check is successful if the monitor is able to connect to the server, log in as the indicated user, and log out. The pop3 monitor requires username and password.

Figure 4.60 The pop3 monitor template

 monitor pop3 {
#type pop3
interval 5
timeout 16
dest *:*
username ""
password ""
}

Using smtp

The smtp template is for Simple Mail Transport Protocol servers. An smtp monitor is an extremely simple monitor that checks only that the server is up and responding to commands. The check is successful if the mail server responds to the standard SMTP HELO and QUIT commands. An smtp monitor requires a domain name.

Figure 4.61 The smtp monitor template

 monitor smtp {
#type smtp
interval 5
timeout 16
dest *:*
domain ""
}

Using snmp_dca

The snmp_dca template is used for load balancing traffic to servers that are running an SNMP agent, such as UC Davis or Windows 2000. In addition to defining ratio weights for CPU, memory, and disk use, you can also define weights for user data. Figure 4.62 shows the snmp_dca monitor template.

Figure 4.62 snmp_dca monitor template

 monitor snmp_dca {
#type snmp_dca
interval 10
timeout 30
dest *:161
agent_type "UCD"
cpu_coefficient "1.5"
cpu_threshold "80"
mem_coefficient "1.0"
mem_threshold "70"
disk_coefficient "2.0"
disk_threshold "90"
}

For detailed information on using the snmp_dca template, see Configuring SNMP servers, on page 4-16.

Using snmp_dca_base

Like the snmp_dca template, the snmp_dca_base template is for load balancing traffic to servers that are running an SNMP agent, such as UC Davis or Windows 2000. However, this template should be used only when you want the load balancing destination to be based solely on user data, and not CPU, memory, or disk use. Figure 4.63 shows the snmp_dca_base monitor template.

Figure 4.63 snmp_dca_base monitor template

 monitor snmp_dca_base {
#type snmp_dca_base
interval 10
timeout 30
dest *:161
}

For detailed information on using the snmp_dca_base template, see Configuring SNMP servers, on page 4-16.

Using nntp

The nntp template is for Usenet News. The check is successful if the monitor retrieves a newsgroup identification line from the server. An nntp monitor requires a newsgroup name (for example, "alt.cars.mercedes") and, if necessary, username and password.

Figure 4.64 The nntp monitor template

 monitor nntp {
#type nntp
interval 5
timeout 16
dest *:*
username ""
password ""
newsgroup ""
}

Using sql

The sql template is for service checks on SQL-based services such as Microsoft SQL Server versions 6.5 and 7.0, and also Sybase. The service checking is accomplished by performing an SQL login to the service. An executable program, tdslogin, performs the actual login. The check is successful if the login succeeds.

An sql monitor requires a database (for example, "server_db"), username, and password.

Figure 4.65 The sql monitor template

 monitor sql {
#type sql
interval 5
timeout 16
dest *:*
username ""
password ""
database ""
}

Using imap

The imap template is for Internet Message Access Protocol. The imap monitor is essentially a pop3 monitor with the addition of the attribute folder, which takes the optional key message_num. The check is successful if the specified message number is retrieved. An imap monitor requires username, password, and a folder. It also takes an optional message number, message_num.

Figure 4.66 The imap monitor template

 monitor imap {
#type imap
interval 5
timeout 16
dest *:*
username ""
password ""
folder ""
//message_num ""
}

Using radius

The radius template is for Remote Access Dial-in User Service servers. The check is successful if the server authenticates the requesting user. A radius monitor requires a username, a password, and a shared secret string, secret. (Enter the secret as a quoted string, even if it is a numeric code.)

Note: Servers to be checked by a radius monitor typically require special configuration to maintain a high level of security while also allowing for monitor authentication.

Figure 4.67 The radius monitor template

 monitor radius {
#type radius
interval 5
timeout 16
dest *
username ""
password ""
secret ""
}

Using ldap

The ldap template is for Lightweight Directory Access Protocol, which implements the X.500 standard for e-mail directory consolidation. A check is successful if entries are returned for the base and filter specified. An ldap monitor requires a username, a password, and base and filter strings. The username is a distinguished name, that is, an LDAP-format user name. The base is the starting place in the LDAP hierarchy from which to begin the query. The filter is an LDAP-format key of what is to be searched for.

Note: Servers to be checked by an ldap monitor typically require special configuration to maintain a high level of security while also allowing for monitor authentication.

Figure 4.68 The ldap monitor template

 monitor ldap {
#type ldap
interval 5
timeout 16
dest *:*
username ""
password ""
base ""
filter ""
}

Using external

The external template is for a user-supplied monitor. An external monitor requires the executable name (run) of that monitor and any command line arguments (args) required.

Figure 4.69 The external monitor template

 monitor external {
#type external
interval 5
timeout 16
dest *:*
run ""
args ""
}

Configuring a monitor

The second step in creating a monitor and placing it in service is to configure the monitor from the monitor template. Configuring a monitor consists of giving it a name distinct from the monitor template name and assigning values to all attributes that are not to be left at their default values (and adding any optional attributes that are not present by default, like reverse or transparent). You can do this using the Configuration utility or at the command line using the bigpipe monitor command.

To configure a monitor using the Configuration utility

  1. In the navigation pane, click Monitors.
    The Network Monitors screen opens.
  2. Click the Add button.
    The Add Monitor screen opens.
  3. In the Add Monitor screen, type in the name of your monitor (it must be different from the monitor template name), and select the monitor template you want to use.
  4. Click the Next button and you are guided through the configuration of your monitor.
  5. When you have finished configuring the monitor, click Done.

To configure a monitor from the command line

Use the bigpipe monitor command to configure the monitor at the command line. If you are defining the monitor with all attributes set to their default values, type:

b monitor <name> '{ use <template_name> }'

For example, to create a tcp_echo monitor my_tcp_echo using the default values for the attributes interval, timeout, and dest, you would type:

b monitor my_tcp_echo '{ use tcp_echo }'

If you are changing any of the default values, you need to specify only these changes. For example:

b monitor my_tcp_echo '{ use tcp_echo interval 10 timeout 20 }'

If you are using an optional attribute, such as transparent, add it to the list:

b monitor my_tcp_echo '{ use tcp_echo interval 10 timeout 20 transparent dest 198.192.112.13:22 }'
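Conceptually, configuring a monitor from a template is a merge of the template's default attributes with only the attributes you name. A sketch of that semantics, using hypothetical Python dictionaries:

```python
# Default attributes of the tcp_echo template (from Figure 4.55).
TCP_ECHO_DEFAULTS = {"interval": 5, "timeout": 16, "dest": "*"}

def configure_monitor(template_defaults: dict, **overrides) -> dict:
    """Start from the template's defaults and apply only the
    attributes the user names, as 'b monitor <name> { use ... }'
    does. (Sketch only, not BIG-IP code.)"""
    monitor = dict(template_defaults)
    monitor.update(overrides)
    return monitor

my_tcp_echo = configure_monitor(TCP_ECHO_DEFAULTS, interval=10, timeout=20)
```

Here my_tcp_echo keeps the template's dest of * while interval and timeout take the new values.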

Note: If you are configuring an snmp_dca or snmp_dca_base monitor, see also Configuring SNMP servers, on page 4-16.

Monitor attributes

Table 4.26 provides a summary of the monitor attributes and their definitions. For more information on the monitor templates and attributes, refer to Selecting the monitor template, on page 4-138

Monitor attributes

Attribute

Definition

interval <seconds>

Ping frequency time interval in seconds.

timeout <seconds>

Ping timeout in seconds.

dest <node_addr>

Ping destination node. Usually * for simple monitors and *:* for all others, causing the monitor instance to ping the address or address:port for which it is instantiated. Specifying an explicit address and/or port forces the destination to that address/port.

send <string>

Send string for ECV. Default send and recv values are empty (""), matching any string.

recv <string>

Receive expression for ECV. Default send and recv values are empty (""), matching any string.

get <string>

For the http and https monitors, get replaces the send statement, automatically filling in "GET". For the ftp monitor, get can be used to specify a full path to a file.

url

For the http, https, and ftp monitors, url takes a URL as a value and automatically fills in the dest value with the address the URL resolves to.

reverse

A mode that sets the node down if the received content matches the recv string.

transparent

A mode that forces pinging through the node to the dest address for transparent nodes, such as firewalls.

run <program>

An external user-added EAV program.

args <program_args>

List of command line arguments for external program. The args are quoted strings set apart by spaces.

username <username>

User name for services with password security. For ldap this is a distinguished name, that is, an LDAP-format user name.

password <password>

Password for services with password security.

newsgroup <newsgroup>

Newsgroup, for type nntp EAV checking only.

database <database>

Database name, for type sql EAV checking only.

domain <domain_name>

Domain name, for type smtp EAV checking only.

secret

Shared secret for radius EAV checking only.

folder

Folder name for imap EAV checking only.

message_num

Optional message number for imap EAV.

base

Starting place in the LDAP hierarchy from which to begin the query, for ldap EAV checking only.

filter

LDAP-format key of what is to be searched for, for ldap EAV checking only.

Entering string values

Except for interval, timeout, and dest, you should enter all attribute values as quoted strings, even if they are numeric, as in the case of code numbers.

Setting destinations

By default, all dest values are set to the wildcard "*" or "*:*". This causes the monitor instance created for a node to take that node's address or address and port as its destination. An explicit dest value is used only to force the instance destination to a specific address and/or port which may not be that of the node. For more information about setting destinations, refer to Associating the monitor with a node or nodes, on page 4-153.
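The wildcard substitution can be sketched as follows: each * part of dest falls back to the associated node's own address or port, while an explicit part forces (aliases) that part of the destination. This is an illustrative sketch only; the function name is hypothetical.

```python
def resolve_dest(node_addr: str, node_port: int, dest: str = "*:*"):
    """Resolve a monitor instance's destination from its dest value.
    A '*' in either position falls back to the associated node's own
    address or port; an explicit value overrides (aliases) it."""
    addr_part, _, port_part = dest.partition(":")
    addr = node_addr if addr_part in ("", "*") else addr_part
    port = node_port if port_part in ("", "*") else int(port_part)
    return addr, port
```

So for a node 11.12.11.20:80, a dest of *:* yields the node itself, *:443 aliases the port, and 11.11.11.1:80 aliases the address.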

Using send, receive, url, and get statements

The ECV monitor templates http, https, and tcp have the attributes send and recv for the send string and receive expression, respectively.

The most common send string is "GET /" which simply retrieves a default HTML page for a web site. To retrieve a specific page from a web site, simply enter a fully qualified path name:

"GET /www/support/customer_info_form.html"

The receive expression is the text string the monitor looks for in the returned resource. The most common receive expressions contain a text string that would be included in a particular HTML page on your site. The text string can be regular text, HTML tags, or image names.

The sample receive expression below searches for a standard HTML tag.

"<HEAD>"

You can also use the default null recv value "". In this case, any content retrieved is considered a match. If both send and recv are left empty, only a simple connection check is performed.

For http, https, and ftp, the special attributes get or url may be used in place of send and recv statements. The attribute get takes the full path to the file as a value. The following two statements are equivalent:

send "GET /"
get "/"

The attribute url takes the URL as a value and automatically fills in the dest value with the address the URL resolves to. The third statement below is equivalent to the first two combined:

dest 198.192.112.13:22
get "/"

url "ftp://www.my_domain.com/"

Using transparent and reverse modes

The ECV monitors have optional keywords transparent and reverse. (The keyword transparent may also be used by tcp_echo.) The normal and default mode for a monitor is to ping the dest node by an unspecified route and to mark the node up if the test is successful. There are two other modes, transparent and reverse.

In transparent mode, the monitor is forced to ping through the node it is associated with, usually a firewall, to the dest node. (In other words, if there are two firewalls in a load balancing pool, the destination node will always be pinged through the one specified and not through the one picked by the load balancing method.) In this way, the transparent node is tested as well: if there is no response, the transparent node is marked down. For more information about transparent mode, refer to Using transparent mode, on page 4-156.

In reverse mode, the monitor marks the node down when the test is successful. For example, if the content on your web site home page is dynamic and changes frequently, you may want to set up a reverse ECV service check that looks for the string "Error". A match for this string would mean that the web server was down.

Transparent and reverse modes cannot be used on the same monitor.
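The effect of reverse mode on how a check result maps to node state can be summarized in a small truth table. This is a sketch of the behavior described above, not BIG-IP code:

```python
def node_state(content_matched: bool, reverse: bool = False) -> str:
    """Normal mode: a match (successful test) marks the node up.
    Reverse mode: a match marks the node down -- for example, a recv
    expression of "Error" matching means the server is failing."""
    if reverse:
        return "down" if content_matched else "up"
    return "up" if content_matched else "down"
```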

Testing SQL service checks

SQL service checks may require manual testing before being implemented in a monitor, as follows:

cd /usr/local/lib/pingers

./tdslogin 192.168.1.1 1433 mydata user1 mypass1

Replace the IP address, port, database, user, and password in this example with your own information.

You should receive the message:

Login succeeded!

If you receive a connection refused message, verify that the IP address and port are correct.

If you are still having trouble, verify that you can log in using another tool. For example, Microsoft SQL Server version 6.5 includes a client program, ISQL/w, that performs simple logins to SQL servers. Use this program to test whether you can log in before attempting logins from the BIG-IP.

On the SQL Server, you can run the SQL Enterprise Manager to add logins. When first entering the SQL Enterprise Manager, you may be prompted for the SQL server to manage.

You can register servers by entering the machine name, user name, and password. If these names are correct, the server will be registered and you will be able to click an icon for the server. When you expand the subtree for the server, there will be an icon for Logins.

Underneath this subtree, you can find the SQL logins. Here, you can change passwords or add new logins by right-clicking the Logins icon. Click this icon to open an option to Add login. After you open this option, type the user name and password for the new login, as well as which databases the login is allowed to access. You must grant the test account access to the database you specify in the EAV configuration.

Running user-added EAVs

You may add your own monitors to those contained in /usr/local/lib/pingers. To run these added programs, use the monitor template external. The executable program is specified as the value of the attribute run. By default, the monitor looks for the run program in /usr/local/lib/pingers. If the program resides elsewhere, a fully qualified path name must be entered. Any command line arguments to be used with the program are entered as args values. For example, suppose the program my_pinger is to be run with a -q option, so that it would be entered on the command line as follows:

my_pinger -q

This monitor might be specified as follows:

b monitor custom '{ use external run "my_pinger" args "-q" }'

Alternatively, you may pass arguments to the external monitor as environment variables. For example, you might want to enter this command:

/var/my_pinger /www/test_files/first_test

This could be specified in the conventional manner:

b monitor custom '{ use external run "/var/my_pinger" args "/www/test_files/first_test" }'

It could also be specified in this way:

b monitor custom '{ use external run "/var/my_pinger" DIRECTORY "/www/test_files" FILE "first_test" }'

This defines the monitor as shown in Figure 4.70.

Figure 4.70 Monitor template for an external monitor

 monitor custom { 
use external
run "/var/my_pinger"
DIRECTORY "/www/test_files"
FILE "first_test" }

This frees the monitor definition from the rigidity of a strictly ordered command line entry. The arguments are now order-independent and may be used or ignored by the external executable.
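A user-added pinger supporting both invocation styles above might resolve its target as sketched below. The DIRECTORY and FILE names mirror this example only; an external monitor can define and consume whatever variables it likes.

```python
import os
import sys

def target_from_invocation(argv, env):
    """Resolve the file to check either from the first command-line
    argument (the args style) or from the DIRECTORY and FILE
    environment variables (the order-independent style), whichever
    the monitor definition supplied. Sketch only."""
    if len(argv) > 1:
        return argv[1]
    if "DIRECTORY" in env and "FILE" in env:
        return env["DIRECTORY"].rstrip("/") + "/" + env["FILE"]
    return None

def main():
    # A real pinger would probe the resolved target here and use its
    # exit status to report success (0) or failure (nonzero).
    target = target_from_invocation(sys.argv, os.environ)
    return 0 if target is not None else 1
```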

Showing, disabling, and deleting monitors

You can show, disable, and delete monitors using the Configuration utility or from the command line. Deleting a monitor removes it from the /config/bigip.conf file. Disabling a monitor instance simply removes that instance from service until it is re-enabled. Disabling a monitor (which can be performed only at the command line) disables all instances of the monitor. All monitor instances are enabled by default.

To show or delete a monitor using the Configuration utility

  1. In the navigation pane, click Monitors.
    A screen opens that lists monitors in two columns, System Supplied and User Defined.
  2. To show a monitor, simply click the monitor name.
  3. To delete a monitor, click the Delete button for the monitor. Note that only user-defined monitors can be deleted.

To show a monitor from the command line

You can display a selected monitor or all monitors using the bigpipe monitor show command:

b monitor <name> show

b monitor show all

To delete a monitor from the command line

You can delete a selected monitor using the bigpipe monitor delete command:

b monitor <name> delete

To disable a monitor instance using the Configuration utility

  1. In the navigation pane, click Monitors.
    The Monitors screen opens.
  2. Click the appropriate tab for the monitor instances: Basic Associations, Node Associations, or Node Address Associations. The resulting screen shows the existing associations (monitor instances).
  3. Click the node you want to disable.
    The Properties screen for that node opens.
  4. In the Monitor Instances portion of the screen, clear the Enable check box.
  5. Click Apply.
    The monitor instance is now disabled.

To disable a monitor or monitor instance from the command line

To disable a monitor, use the bigpipe monitor <name> disable command:

b monitor <name> disable

This has the effect of disabling all instances of the monitor, as shown in Figure 4.71.

Figure 4.71 All monitor instances disabled

 +- NODE  11.12.11.20:80   UP
| |
| +- http
| 11.12.11.20:80 up disabled
|
+- NODE 11.12.11.21:80 UP
| |
| +- http
| 11.12.11.21:80 up disabled
|
+- NODE 11.12.11.22:80 UP
|
+- http
11.12.11.22:80 up disabled

To disable a monitor instance, use the bigpipe monitor instance <addr:port> disable command:

b monitor instance <addr:port> disable

Disabled monitors and instances may be re-enabled as follows:

b monitor <name> enable

b monitor instance <addr:port> enable

To delete a monitor with no node associations from the command line

You can delete a monitor if it has no existing node associations and no references in a monitor rule. To delete a monitor, use the bigpipe monitor <name> delete command:

b monitor my_http delete

If the monitor has instances, the instances must first be deleted using the bigpipe node <addr:port> monitor delete command. (Refer to Showing and deleting associations, on page 4-158.)

Associating the monitor with a node or nodes

Now that your monitor exists, the final step is to associate it with the nodes to be monitored. This creates an instance of the monitor for each node. At the command line, association is done using bigpipe node command:

b node <addr_list> monitor use <name>

For example, to associate monitor http with nodes 11.12.11.20:80, 11.12.11.21:80, and 11.12.11.22:80, the bigpipe node command would be as follows:

b node 11.12.11.20:80 11.12.11.21:80 11.12.11.22:80 monitor use http

This creates a monitor instance of http for each of these nodes. You can verify this association using the bigpipe monitor show command:

b node monitor show

This would produce the output shown in Figure 4.72.

Figure 4.72 The output of the b node monitor show command

 +- NODE  11.12.11.20:80   UP
| |
| +- http
| 11.12.11.20:80 up enabled
|
+- NODE 11.12.11.21:80 UP
| |
| +- http
| 11.12.11.21:80 up enabled
|
+- NODE 11.12.11.22:80 UP
|
+- http
11.12.11.22:80 up enabled

The actual monitor instance for each node is represented in Figure 4.72 by the indented lines showing the instance destination and status (for example, 11.12.11.20:80 up enabled).

Reviewing types of association

While the term node association is applied generally, there are three types of association based on whether the monitor is associated with an address and a port, address(es) only, or port only. These are node association, address association, and port association.

  • Node association, strictly defined, is the association of a monitor with an address and port.
  • Address association is the association of a monitor with an address only.
  • Port association is the association of a monitor with a port only. For a port association, a wildcard character (*) is used to represent all addresses.

    Once a monitor has been associated with a node, address, or port, no other monitor can be associated with the same node, address, or port. However, an address association does not prevent a monitor from being associated with a node of the same address, or the reverse.

Using a simple association

The http example given above is the simplest kind of association, a node association performed using a monitor with a dest value of *:*. It can be seen in Figure 4.72 that in each case the instance destination node is identical to the node the monitor has been associated with. This is because the template http, shown in Figure 4.57, on page 4-140, was used as is, with a dest value of *:*. Either or both wildcard symbols can be replaced by an explicit dest value by creating a new monitor based on http. This is referred to as node and port aliasing, described in the following section.

Using node and port aliasing

Usually the health of a node is checked by pinging that node. For this reason the dest attribute in the monitor template is always set to "*" or "*:*". This causes the monitor instance created for a node to take that node's address or address and port as its destination. An explicit dest value forces the instance destination to a specific address and/or port which may not be that of the node. This causes the monitor to ping that forced destination by an unspecified path. Suppose, for example, that the association performed using http instead used a monitor my_http with a dest value of *:443. The node association command would be identical except that http is now replaced with my_http:

b node 11.12.11.20:80 11.12.11.21:80 11.12.11.22:80 monitor use my_http

This creates three instances of the monitor with the following dest values as shown in Figure 4.73.

Figure 4.73 Node ports aliased

 +- NODE  11.12.11.20:80   UP
| |
| +- my_http
| 11.12.11.20:443 up enabled
|
+- NODE 11.12.11.21:80 UP
| |
| +- my_http
| 11.12.11.21:443 up enabled
|
+- NODE 11.12.11.22:80 UP
|
+- my_http
11.12.11.22:443 up enabled

This is referred to as port aliasing. The node itself can also be aliased, by assigning an explicit address to dest. For example, dest could be set to 11.11.11.1:80. This is called node aliasing, and for the nodes 11.12.11.20:80, 11.12.11.21:80, and 11.12.11.22:80 it would produce the following instances (which are in fact one instance associated with three different nodes), as shown in Figure 4.74.

Figure 4.74 Node addresses aliased

 +- NODE  11.12.11.20:80   ADDR UP
| |
| +- my_http
| 11.11.11.1:80 checking enabled
|
+- NODE 11.12.11.21:80 ADDR UP
| |
| +- my_http
| 11.11.11.1:80 checking enabled
|
+- NODE 11.12.11.22:80 ADDR UP
|
+- my_http
11.11.11.1:80 checking enabled

Using transparent mode

Sometimes it is necessary to ping the aliased destination through a transparent node. Common examples are checking a router, or checking a mail or FTP server through a firewall. For example, you might want to check the router address 10.10.10.53:80 through a transparent firewall 10.10.10.101:80. To do this, you would specify 10.10.10.53:80 as the monitor dest address (a node alias) and add the flag transparent:

b monitor http_trans '{ use http dest 10.10.10.53:80 transparent }'

Then you would associate the monitor http_trans with the transparent node:

b node 10.10.10.101:80 monitor use http_trans

This causes the address 10.10.10.53:80 to be checked through 10.10.10.101:80. (In other words, the check of 10.10.10.53:80 is routed through 10.10.10.101:80.) If the correct response is not received from 10.10.10.53:80, then 10.10.10.101:80 is marked down.

Note: Transparent mode applies only to the ECV monitors and to tcp_echo.

Using logical grouping

In the preceding examples, only one monitor has been associated with the nodes. You may associate more than one monitor with a node or nodes by joining them with the Boolean operator and. This creates a rule, and the node is marked as down if the rule evaluates to false, that is, if not all the checks are successful. The most common example is the use of an HTTP monitor and an HTTPS monitor:

b node 11.12.11.20:80 monitor use my_http and my_https

The monitors themselves must be configured with the grouping in mind. For example, if the dest values of both monitors were set to *:*, then both monitor instances would try to ping the default port 80. This would both defeat the purpose of the HTTPS monitor and cause an automatic failure, since two monitors would be trying to ping the same address and port simultaneously.

Instead, monitor my_http should be given a dest value of *:* and monitor my_https should be given a dest value of *:443. This causes only the my_http monitor instances to default to 80. The my_https monitor instances are forced to the explicit port 443, avoiding a conflict as shown in Figure 4.75.

Figure 4.75 Use of a monitor rule

 MONITOR  my_http and my_https
 |
 +- NODE 11.12.11.20:80 UP
 |  |
 |  +- my_http
 |  |  11.12.11.20:80 up enabled
 |  |
 |  +- my_https
 |     11.12.11.20:443 up enabled

Using wildcards to specify addresses

The wildcard * can be used to specify addresses. A wildcard address association creates instances of the monitor for all nodes load balanced by the BIG-IP:

b node * monitor use my_tcp_echo

A wildcard with a port association creates instances of the monitor for every node configured to service that port:

b node *:80 monitor use my_http

To associate a monitor using the Configuration utility

  1. In the navigation pane, click Monitors.
    The Network Monitors screen opens.
  2. Click one of three tabs:

    • If you are associating the monitor with a node (the IP address plus the port), click the Node Associations tab.
    • If you are associating the monitor with a node address only (the IP address minus the port), click the Node Address Associations tab.
    • If you are associating the monitor with a port only (the port minus the IP address), click the Port Associations tab.
  3. Regardless of the selection you made in step 2, a dialog box appears with the boxes Choose Monitor and Monitor Rule. Type the monitor name or select one from the list.
  4. If you want to associate more than one monitor, click the Move >> button to add the monitor name to the Monitor Rule box.
  5. Repeat the previous two steps for each monitor you want to associate with a node.
  6. Click Apply to associate the monitor(s).
    For additional information about associating a monitor, click the Help button.

Showing and deleting associations

There are node commands for showing and deleting node associations.

To show or delete associations using the Configuration utility

  1. In the navigation pane, click Monitors.
    The Network Monitors screen opens.
  2. Click one of three tabs:

    • If you are showing or deleting a node association (a node is the IP address plus the port), click the Node Associations tab.
    • If you are showing or deleting a node address association (the IP address minus the port), click the Node Address Associations tab.
    • If you are showing or deleting a port association (the port minus the IP address), click the Port Associations tab.
      Regardless of the selection you made in step 2, a dialog box opens showing existing associations, and with a Delete Existing Associations check box.
  3. Delete associations by checking the box, then clicking Apply.

    Note: For wildcard address associations, the wildcard (*) association itself is shown in addition to each of the individual associations it produces. To delete all these associations, it is necessary to delete the wildcard association.

To show associations from the command line

You can display a selected node association or all node associations using the bigpipe node monitor show command:

b node monitor show

b node <addr:port> monitor show

To delete associations from the command line

You can delete a selected node association or all node associations using the bigpipe node monitor delete command:

b node <addr:port> monitor delete

In deleting specific monitor instances, it is important to consider how the association was made. If monitor instances were created using a wildcard address, the wildcard association must be deleted. For example, if multiple associations were created by entering b node *:80 monitor use my_tcp_echo, you would delete them by typing:

b node *:80 monitor delete

If multiple associations were created by entering b node * monitor use my_tcp_echo, you would delete them by typing:

b node * monitor delete
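
A typical sequence is to list the existing associations before and after deleting, so you can confirm which instances a wildcard produced and that they are gone. A sketch using the commands above:

```shell
# List every node association currently configured, including the
# wildcard entries and the individual instances they produce.
b node monitor show

# Delete the wildcard association; this removes all the individual
# instances created by "b node *:80 monitor use my_tcp_echo".
b node *:80 monitor delete

# Confirm that the instances have been removed.
b node monitor show
```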