In addition to the basic setup features of the BIG/ip Controller, a number of special setup features are available to help you optimize your network. This chapter describes these special setup options. The features are optional, and may not be required in your implementation of the BIG/ip Controller. The following topics are described in this chapter:
Load balancing is an integral part of the BIG/ip Controller. A load balancing mode defines, in part, the logic that a BIG/ip Controller uses to determine which node should receive a connection hosted by a particular virtual server. The BIG/ip Controller supports specialized load balancing modes that dynamically distribute the connection load, rather than following a static distribution pattern such as Round Robin. Dynamic distribution of the connection load is based on various aspects of real-time server performance analysis, such as the current number of connections per node or the fastest node response time. The following section describes how each load balancing mode distributes connections, as well as how to set the load balancing mode on the BIG/ip Controller.
Note: These load balancing modes apply globally. For information about the specific load balancing methods designed for use with pools, see Chapter 3, Introducing Intelligent Traffic Control (ITC).
Individual load balancing modes take into account one or more dynamic factors, such as current connection count. Because each application of the BIG/ip Controller is unique, and node performance depends on a number of different factors, we recommend that you experiment with different load balancing modes, and choose the one that offers the best performance in your particular environment.
Note: The load balancing methods described in this section are the advanced load balancing modes. For more information about Round Robin or Ratio mode, see the BIG/ip Controller Getting Started Guide.
Fastest mode passes a new connection based on the fastest response of all currently active nodes. Fastest mode may be particularly useful in environments where nodes are distributed across different logical networks.
Least Connections mode is relatively simple in that the BIG/ip Controller passes a new connection to the node with the least number of current connections. Least Connections mode works best in environments where the servers or other equipment you are load balancing have similar capabilities.
Observed mode uses a combination of the logic used in the Least Connections and Fastest modes. In Observed mode, nodes are ranked based on a combination of the number of current connections and the response time. Nodes that have a better balance of fewest connections and fastest response time receive a greater proportion of the connections. Observed mode works well in any environment, but may be particularly useful in environments where node performance varies significantly.
Predictive mode also uses the ranking methods used by Observed mode, where nodes are rated according to a combination of the number of current connections and the response time. However, in Predictive mode, the BIG/ip Controller analyzes the trend of the ranking over time, determining whether a node's performance is currently improving or declining. The nodes with better performance rankings that are currently improving, rather than declining, receive a higher proportion of the connections. Predictive mode works well in any environment.
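The selection logic of these dynamic modes can be sketched as follows. This is a simplified illustration only, not the controller's actual implementation; in particular, the equal weighting of connection count and response time in the Observed-style rank, and the tie-breaking on trend in Predictive mode, are assumptions.

```python
def pick_least_connections(nodes):
    # Least Connections: choose the node holding the fewest current connections.
    return min(nodes, key=lambda n: n["connections"])

def pick_fastest(nodes):
    # Fastest: choose the node with the lowest measured response time.
    return min(nodes, key=lambda n: n["response_time"])

def observed_rank(node):
    # Observed: lower is better; combine connection count and response
    # time into a single score (equal weighting is assumed here).
    return node["connections"] + node["response_time"]

def pick_observed(nodes):
    return min(nodes, key=observed_rank)

def pick_predictive(nodes, previous_ranks):
    # Predictive: like Observed, but the trend of the rank over time is
    # also considered; among similarly ranked nodes, an improving node
    # (current rank lower than its previous rank) is preferred.
    def trend_score(node):
        current = observed_rank(node)
        delta = current - previous_ranks.get(node["name"], current)
        return (current, delta)  # better rank first, improving trend breaks ties
    return min(nodes, key=trend_score)
```

For example, a node with few connections but a slow response time loses to a node that balances both, which matches the Observed-mode description above.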
Priority mode is a special type of round robin load balancing. In Priority mode, you define groups of nodes and assign a priority level to each group. The BIG/ip Controller begins distributing connections in a round robin fashion to all nodes in the highest priority group. If all the nodes in the highest priority group go down or hit a connection limit maximum, the BIG/ip Controller begins to pass connections on to nodes in the next lower priority group.
For example, in a configuration that has three priority groups, connections are first distributed to all nodes set as priority 3. If all priority 3 nodes are down, connections begin to be distributed to priority 2 nodes. If both the priority 3 nodes and the priority 2 nodes are down, connections then begin to be distributed to priority 1 nodes, and so on. Note, however, that the BIG/ip Controller continuously monitors the higher priority nodes, and each time a higher priority node becomes available, the BIG/ip Controller passes the next connection to that node.
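The priority-group behavior described above can be sketched as follows. The scheduler and its data layout are hypothetical, but the logic follows the description: round robin within the highest-priority group that still has available nodes, falling back to lower groups only when every higher-priority node is unavailable.

```python
from itertools import cycle

def build_priority_scheduler(groups):
    # groups: dict mapping priority level -> list of node names.
    # Returns a function that yields the next node to receive a
    # connection, drawing round robin from the highest-priority group
    # that has at least one node up.
    iters = {p: cycle(nodes) for p, nodes in groups.items()}

    def next_node(up_nodes):
        # Walk priorities from highest to lowest; use the first group
        # with an available node.
        for priority in sorted(groups, reverse=True):
            available = [n for n in groups[priority] if n in up_nodes]
            if available:
                # Advance the group's round-robin iterator until it
                # lands on a node that is currently up.
                node = next(iters[priority])
                while node not in up_nodes:
                    node = next(iters[priority])
                return node
        return None  # every node is down
    return next_node
```

Because availability is re-evaluated on every connection, a higher-priority node that comes back up immediately starts receiving connections again, as the paragraph above describes.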
The load balancing mode is a system property of the BIG/ip Controller, and it applies to all standard and wildcard virtual servers defined in the configuration.
The command syntax for setting the load balancing mode is:
bigpipe lb <mode name>
Table 2.1 describes the valid options for the <mode name> parameter.
| Mode name | Description |
| --- | --- |
| priority | Sets load balancing to Priority mode. |
| least_conn | Sets load balancing to Least Connections mode. |
| fastest | Sets load balancing to Fastest mode. |
| observed | Sets load balancing to Observed mode. |
| predictive | Sets load balancing to Predictive mode. |
If you set the load balancing mode to either Ratio mode or Priority mode, you need to set a special property on each node address.
The bigpipe ratio command sets the ratio weight for one or more node addresses:
bigpipe ratio <node IP> [<node IP>...] <ratio weight>
The following example defines ratio weights for three node addresses. The first command sets the first node address to receive half of the connection load. The second command sets the two remaining node addresses to each receive one quarter of the connection load.
bigpipe ratio 192.168.10.1 2
bigpipe ratio 192.168.10.2 192.168.10.3 1
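The effect of ratio weights on the distribution of connections can be sketched as a simple proportion. This illustrates the arithmetic only, not the controller's internal mechanism:

```python
def ratio_distribution(weights):
    # weights: dict mapping node address -> ratio weight.
    # Returns each node's share of the total connection load.
    total = sum(weights.values())
    return {node: w / total for node, w in weights.items()}

# With weights 2, 1, 1 the first node receives half of the load and
# the other two nodes receive one quarter each.
shares = ratio_distribution({
    "192.168.10.1": 2,
    "192.168.10.2": 1,
    "192.168.10.3": 1,
})
```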
Warning: If you set the load balancing mode to Ratio or Priority, you must define the ratio or priority settings for each node address. The value you define using the bigpipe ratio command is used as the ratio value if Ratio is the currently selected load balancing mode, and the same value is used as the priority level if Priority is the currently selected load balancing mode.
Filters control network traffic by setting whether packets are forwarded or rejected at the external network interface. Filters apply to both incoming and outgoing traffic. When creating a filter, you define criteria which are applied to each packet that is processed by the BIG/ip Controller. You can configure the BIG/ip Controller to forward or block each packet based on whether or not the packet matches the criteria.
The BIG/ip Controller supports two types of filters, IP filters and rate filters.
Typical criteria that you define in IP filters are packet source IP addresses, packet destination IP addresses, and upper-layer protocol of the packet. However, each protocol has its own specific set of criteria that can be defined.
For a single filter, you can define multiple criteria in multiple, separate statements. Each of these statements should reference the same identifying name or number, to tie the statements to the same filter. You can have as many criteria statements as you want, limited only by the available memory. Of course, the more statements you have, the more difficult it is to understand and maintain your filters.
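The way multiple criteria statements combine into a single filter decision can be sketched as follows. The field names, the first-match behavior, and the default forward policy are illustrative assumptions, not documented BIG/ip behavior:

```python
def packet_matches(packet, criteria):
    # A packet matches a criteria statement when every field named in
    # the statement equals the packet's value for that field.
    return all(packet.get(field) == value for field, value in criteria.items())

def apply_filter(packet, statements):
    # statements: list of (criteria, action) pairs belonging to one
    # filter, checked in order; action is "forward" or "block".
    for criteria, action in statements:
        if packet_matches(packet, criteria):
            return action
    return "forward"  # assumed default policy for unmatched packets
```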
When you define an IP filter, you can filter traffic in two ways:
Note: For information on configuring IP filters and rate filters on the command line, refer to the IPFW man page by typing man ipfw on the command line. You can configure more complex filtering through the IPFW command line interface.
In addition to IP filters, you can also define rates of access by using a rate filter. Rate filters consist of the basic filter and a rate class. Rate classes define how many bits per second are allowed per connection and the number of packets in a queue.
Rate filters are a type of extended IP filter. They use the same IP filter method, but they apply a rate class which determines the volume of network traffic allowed through the filter.
Rate filters are useful for sites that have preferred clients. For example, an e-commerce site may want to set a higher throughput for preferred customers, and a lower throughput for random site traffic.
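A rate class's two parameters, bits per second and queue length, can be modeled with a sketch like the following. This is a simplified illustration; the controller's actual queuing discipline is not documented here:

```python
class RateClass:
    # Simplified model of a rate class: a bits-per-second ceiling and a
    # bounded packet queue (both parameters are illustrative).
    def __init__(self, bits_per_second, queue_length):
        self.bits_per_second = bits_per_second
        self.queue_length = queue_length
        self.queue = []

    def enqueue(self, packet_bits):
        # Queue a packet for transmission if there is room; otherwise drop it.
        if len(self.queue) < self.queue_length:
            self.queue.append(packet_bits)
            return True
        return False

    def drain(self, seconds):
        # Transmit queued packets until the interval's bit budget is spent.
        budget = self.bits_per_second * seconds
        sent = 0
        while self.queue and self.queue[0] <= budget:
            budget -= self.queue[0]
            sent += self.queue.pop(0)
        return sent
```

A preferred-customer rate class would simply be constructed with a higher bits-per-second value than the class applied to random site traffic.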
Configuring rate filters involves creating both a rate filter and a rate class. When you configure rate filters, you can use existing rate classes. However, if you want a new rate filter to use a new rate class, you must configure the new rate class before you configure the new rate filter.
After you have added a rate class, you can configure rate filters for your system.
When you configure a BIG/ip Controller with more than two interface cards installed, you need to address the following issues:
The first step in configuring the BIG/ip Controller with additional interfaces is to run the First-Time Boot utility. This utility detects how many NICs are present in the BIG/ip Controller and displays a list of the NICs detected. Choose the interface from the list that you want to configure.
You can also designate one of your additional internal NICs to hold the IP address through which network administration access is permitted using SSH (or Telnet for international users).
The First-Time Boot utility, config, detects and configures additional interfaces if they are present in the BIG/ip Controller.
As Administrator with root-level permission, enter the following command from the command line:
config
When asked to configure the web server, you are prompted to define a domain name for the interface.
The First-Time Boot utility creates a new /etc/netstart script which supports more than two NICs. It also modifies the /etc/ethers and the interface entries in the BIG/db database.
You may need to edit the /etc/bigip.conf file using a text editor such as vi or pico to add the appropriate interface statements. For example, if you want to designate exp2 as an internal, destination processing interface, add the following lines to the /etc/bigip.conf file:
interface exp2 dest enable
interface exp2 source disable
interface exp2 adminport lockdown
interface exp2 failsafe disarm
interface exp2 timeout 30
Once you have finished editing the /etc/bigip.conf file, reboot the BIG/ip Controller, or restart bigd by typing bigd on the command line, to implement your changes.
When you define a virtual server on a BIG/ip Controller that has more than one external interface (destination processing), you need to specify the external interface (destination processing) that the virtual server's address is associated with.
You can define virtual servers with the bigpipe vip command. Normally, a virtual server is added to the external interface with a network address that matches the network of the virtual address. However, with multiple NICs, you can specify which external interface (destination processing) a virtual server is added to using the bigpipe vip command. To do this, add the <ifname> argument to the command.
bigpipe vip <virt addr>:<port>[/<bitmask>] [<ifname>] \
[unit <ID>] define <node addr>:<port> ... <node addr>:<port>
bigpipe vip <virt addr>:<port> [<ifname>] [unit <ID>] \
[netmask <netmask> [broadcast <broadcast_ip>]] \
define <node addr>:<port> ... <node addr>:<port>
You can set the <ifname> parameter to none if you want to prevent BIG/ip from issuing ARP requests for a specific virtual server. The traffic for a virtual server is accepted on any interface with destination processing enabled, even if the BIG/ip Controller only responds to ARP requests on one interface or if you specify none.
Note: This has the same effect as using the sysctl variable bigip.vipnoarp, but on a server-by-server basis. The sysctl variable bigip.vipnoarp is deprecated; we recommend defining the interface as none instead.
The following example shows how to define a virtual server that is added to a Gigabit Ethernet NIC.
bigpipe vip 18.104.22.168:80/24 sk0 define 22.214.171.124:80
When you define a NAT address on a BIG/ip Controller that has more than one external interface, you need to specify the external interface with which the NAT address is associated.
When mapping a network address translation with the bigpipe nat command, you must now specify which external interface a virtual IP address is added to by using the <ifname> parameter.
bigpipe nat <internal_ip> to <external_ip> [/<bitmask>] \
[<ifname>] [unit <ID>]
bigpipe nat <internal_ip> to <external_ip> [netmask \
<netmask>] [broadcast <broadcast_ip>] [/<bitmask>] \
[<ifname>] [unit <ID>]
The following example shows how to define a NAT where the IP address represented by <external_ip> is added to an Intel NIC.
bigpipe nat 126.96.36.199 to 10.0.140.100/24 exp0
When you define a SNAT address on a BIG/ip Controller that has more than one external interface, you need to specify the external interface with which the SNAT address is associated.
When mapping a secure network address translation with the bigpipe snat command, you must specify which external interface a virtual IP address is added to by using the <ifname> parameter.
bigpipe snat map <internal_ip> to <external_ip> [/<bitmask>] \
[<ifname>] [unit <ID>]
bigpipe snat map <internal_ip> to <external_ip> [netmask \
<netmask>] [broadcast <broadcast_ip>] [/<bitmask>] \
[<ifname>] [unit <ID>]
The following example shows how to define a SNAT where the IP address represented by <external_ip> is added to an Intel NIC.
bigpipe snat map 188.8.131.52 to 10.0.140.100/24 exp0
Use Router Discovery Protocol (RDP) for routing on a BIG/ip Controller with more than one interface. For router configuration information, please refer to documentation included with your router.
The following two scenarios for configuring your network with more than two NICs contain important details related to creating virtual servers.
Each scenario configuration has advantages and disadvantages related to how you set up your virtual servers, which are detailed in the following descriptions. For instructions on how to create virtual servers on specific interfaces, see Specifying an interface for a virtual address, on page 2-13.
When your network is configured with one gateway to the Internet, and has two routers connected to two BIG/ip Controllers behind that gateway, we recommend that you connect the first router to one of the external interfaces on each BIG/ip Controller and the other router to the remaining external interfaces on each BIG/ip Controller for maximum redundancy.
If the gateway is running OSPF, it maintains redundancy by ensuring that there is only one path from your network to the Internet. In the unlikely event that the active router should fail, OSPF determines that the router is not functioning properly, and sends subsequent connections to the second router. Existing connections will persist by going through the second router. Under such conditions, when you create a virtual server, we recommend that you create it to use the default interface.
By using the default interface, the virtual server is guaranteed to handle connections in an efficient manner by cooperating with OSPF's attempts to compensate for the failed router. Otherwise, if the virtual server is configured to use one specific external interface, there is no way for connections to arrive at the virtual server when the router leading to it fails.
When your network is configured with two routers to the Internet, and has two BIG/ip Controllers behind them, we also recommend that you connect the first router to one of the external interfaces on each BIG/ip Controller and the other router to the remaining external interfaces on each BIG/ip Controller. However, in this configuration, you have two entry points into your network, one through each router.
You have the flexibility to decide how clients access the various web sites on your virtual servers, based on how the virtual servers are created. For example, suppose your research department uses an intranet site to exchange sensitive information that needs to be protected. By restricting access to the external interface to only your researchers, you protect the information. On the other hand, you may want all of your employees, including the researchers, to have access to the Human Resources information on the intranet.
You would then create the virtual server that hosts the Human Resources intranet using the default external interface so that any employee connecting from any location can make a connection to that virtual server.
The BIG/ip Controller supports up to 40,000 virtual servers and nodes combined. Larger configurations on a BIG/ip Controller, such as those that exceed 1,000 virtual servers or 1,000 nodes, introduce special configuration issues. To ensure a high performance level, you need to change certain aspects of the BIG/ip Controller's management of virtual servers and nodes. The following steps can be taken to optimize a large configuration.
The BIG/ip Controller maintains an IP alias on its external interface for each virtual address that it manages. IP aliases are broadcast on the network when a virtual server is defined, and also each time a BIG/ip Controller switches from standby mode to active mode in a redundant system. If you have defined thousands of virtual addresses in the BIG/ip Controller configuration, the corresponding ARP requests may lead to a significant increase in network traffic.
This type of configuration also increases fail-over recovery time in BIG/ip redundant systems. When a fail-over occurs, the BIG/ip Controller that becomes the active machine creates an IP alias for each virtual server that it manages. Normally, this process takes less than one second. However, if the BIG/ip Controller has 8,000 virtual servers, this process can take as long as 90 seconds. The active BIG/ip Controller is unresponsive during the time it creates the IP aliases, and it cannot begin processing connections until the IP aliasing is complete.
To ensure a fast fail-over process, and to help reduce the number of ARP requests a router must make, you should run the BIG/ip Controller in bigip.vipnoarp mode. In bigip.vipnoarp mode, the BIG/ip Controller does not create IP aliases for virtual servers. Instead, network traffic bound for virtual servers configured on the BIG/ip Controller is routed using the BIG/ip Controller's external interface as a gateway. Configuring bigip.vipnoarp mode is a two-step process:
Note: You can also enable the noarp option on an individual virtual server. For more information about enabling this option on an individual virtual server, see Specifying an interface for a virtual address, on page 2-13.
Note: You can enable bigip.vipnoarp mode only if you have the ability to add a gateway route to your router. Note that in redundant systems, you need to use the shared external IP address as the gateway address for the virtual servers configured on the BIG/ip Controller.
In the router configuration, you need to define a static route as the gateway for each virtual address managed by the BIG/ip Controller. The static route should set the gateway address to the IP address of the external interface on the BIG/ip Controller. For example, if the shared external address of a BIG/ip redundant system is 11.0.0.101, and all virtual servers configured on the BIG/ip redundant system use IP addresses in the 11.0.1.* subnet, you need to configure the router to use 11.0.0.101 as a gateway to the 11.0.1.* subnet. Such a definition on a UNIX-like router would read:
route add -net 11.0.1.0 gw 11.0.0.101
In the F5 Configuration utility, the bigip.vipnoarp mode setting is under BIG/ip sysctl configuration. To turn this mode on, simply check the Disable IP Aliases on Virtual Servers box. To turn this mode off, clear the Disable IP Aliases on Virtual Servers box.
You can activate bigip.vipnoarp mode in one of two ways:
If you choose to edit the /etc/rc.sysctl file, you simply need to add the following line to the file to activate vipnoarp mode:
sysctl -w bigip.vipnoarp=1
To deactivate bigip.vipnoarp mode, you can either comment the line out or delete it from the /etc/rc.sysctl file altogether. Once you edit the file, the changes do not take effect until you reboot the system.
To immediately activate bigip.vipnoarp mode, type the following on the command line:
sysctl -w bigip.vipnoarp=1
bigpipe -f /etc/bigip.conf
To immediately deactivate bigip.vipnoarp mode, type the following on the command line:
sysctl -w bigip.vipnoarp=0
bigpipe -f /etc/bigip.conf
The BIG/ip Controller checks node status at user-defined intervals in two different ways: with node pings, which are sent to a node address, and with service checks, which verify that a specific service on the node responds.
If a BIG/ip Controller's configuration includes thousands of nodes, the node pings and service checks begin to take up more resources on both the BIG/ip Controller and the servers than is preferred. You can significantly reduce the number of node pings and service checks in configurations that have a group of node addresses which are all IP aliases on the same server.

For each group of node addresses that points to a given server, you can select one node address out of the group to represent all node addresses in the group. The representative node address is referred to as the node alias. When the BIG/ip Controller issues a node ping or service check, it sends the ping or performs the service check only on the node alias, rather than on all nodes in the group.

If the BIG/ip Controller receives a valid response before the time-out expires, it marks all nodes associated with the node alias as up and available to receive connections. If the BIG/ip Controller does not receive a valid response before the time-out expires, it marks all of the nodes associated with the node alias as down.
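The consolidation that a node alias provides can be sketched as follows. This is a simplified model; the ping function and data layout are illustrative:

```python
def check_nodes(node_groups, ping):
    # node_groups: dict mapping a node alias to the list of node
    # addresses (IP aliases on the same server) it represents.
    # ping: function returning True if the alias answers before the
    # time-out expires.
    # One check per alias decides the state of every node in its group.
    status = {}
    for alias, members in node_groups.items():
        alive = ping(alias)
        for node in members:
            status[node] = "up" if alive else "down"
    return status
```

With three IP aliases per server, this performs one check instead of three, which is the resource saving the section describes.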
You can set the BIG/ip Controller to use a node alias for nodes that are configured for service checks; however, there are some limitations to this implementation. Service checks are port-specific, unlike node pings, which are merely sent to a node address. If you assign a node alias to a node that uses a service check, the node alias must be configured to support the port number associated with the node. If the node alias is not configured properly, the BIG/ip Controller cannot establish a conversation with the service that the specific node supports, and the service check is invalid.
Note: If you have configured different ports on each node to handle a specific Internet service and you want to use IP aliases, you can use BIG/pipe commands to work around the situation. Refer to the BIG/ip Controller Reference Guide, BIG/pipe Command Reference for more information about the bigpipe alias command.
In the F5 Configuration utility, each node address has a set of properties associated with it, including the Node Alias property. Note that before you define a node alias for a specific node address, you may want to check the properties for each node that uses the node alias. The node alias must support each port used by a node that is configured for service check, otherwise the service check results are invalid.
The BIG/pipe command line utility allows you to set node aliases for multiple nodes at one time. With the bigpipe alias command, you can do three things:
For details about working with the bigpipe alias command, refer to the BIG/ip Controller Reference Guide, BIG/pipe Command Reference.
The versatile interfaces option adds more flexibility for configuring interfaces. You can now change the source address, the destination address, or the route of an IP packet.
In previous versions of the BIG/ip Controller, interfaces were designated as internal or external. With this version of the BIG/ip Controller you can configure specific interface properties based on the properties in Table 2.2.
| Interface type | Interface properties |
| --- | --- |
| Internal | Processes source addresses; administrative ports open |
| External | Processes destination addresses; administrative ports locked down |
The ability to change the source or destination can be turned on independently. Essentially, this means you can configure an interface so that it handles traffic going to virtual servers and, independently, you can configure the interface to handle traffic coming in from nodes. You can configure virtual servers and nodes on each interface installed on the BIG/ip Controller. This allows for the most flexible processing of packets by the BIG/ip Controller. When either the source or destination processing feature is turned off on an interface, there is a gain in performance.
When you enable destination processing on a BIG/ip Controller interface, the interface functions in the following manner:
When you enable source processing on a BIG/ip Controller interface, the interface functions in the following manner:
You can turn on both source and destination processing for an interface. This is possible because their functions do not overlap. For example, a NAT changes the source address on packets coming from clients so that they look like they have a different IP address, and virtual servers change the destination address to load balance the destination. There is no reason why you cannot do both the NAT translation and the virtual server translation. There are some combinations of virtual server and NAT source processing and virtual server and NAT destination processing that do not make sense. For example, if a virtual server processes a packet during source processing, the packet is not handled by virtual server destination processing. Also, if a virtual server processes a packet during destination processing, the packet is not handled by virtual server source processing.
When destination processing is enabled on an interface, the BIG/ip Controller processes packets arriving at the interface when those packets are addressed to a virtual server, SNAT, or NAT external address.
It is useful to note that there are two independent activities associated with destination processing: routing and translation. For example, wildcard virtual servers load balance connections across transparent network devices (such as a router or firewall), but they do not perform translation. In fact, translation can be turned off for all virtual servers (see Configuring transparent virtual servers, on page 2-32). Also, with the new forwarding virtual servers, neither next-hop load balancing nor translation occurs for connections. These virtual servers only forward packets, so connections can pass through the BIG/ip Controller without being manipulated in any way.
When you plan which type of processing to use in the BIG/ip Controller configuration, consider these questions:
These questions help identify what kind of processing is required for the network interfaces on the BIG/ip Controller.
When source translation processing is enabled on an interface, then the BIG/ip Controller processes packets arriving at the interface when those packets are coming from a node, SNAT, or NAT internal address. In this situation, the interface rewrites the source address of the IP packet, changing it from the real server's IP address, or internal NAT address, to the virtual server or external NAT address, respectively. Also, when the new last hop feature is enabled on a virtual server (see Using per-connection routing, on page 2-29), the packet is routed back to the network device that first transmitted the connection request to the virtual server.
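The two translations can be sketched side by side; the packet representation and map lookups are illustrative assumptions, but they show which address each processing direction rewrites:

```python
def process_destination(packet, virtual_servers):
    # Destination processing: a packet addressed to a virtual server
    # has its destination rewritten to the load-balanced node's address.
    node = virtual_servers.get(packet["dst"])
    if node is not None:
        packet = dict(packet, dst=node)
    return packet

def process_source(packet, nat_map):
    # Source processing: a packet leaving a node has its source
    # rewritten to the corresponding virtual server or external NAT
    # address, so replies appear to come from the public address.
    external = nat_map.get(packet["src"])
    if external is not None:
        packet = dict(packet, src=external)
    return packet
```

Because one function touches only the destination and the other only the source, both can be enabled on the same interface without their effects overlapping, as the section above explains.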
Use the following syntax to configure source and destination processing on the specified interface:
bigpipe interface <interface> dest [ enable | disable ]
bigpipe interface <interface> source [ enable | disable ]
The following example command enables destination processing on the interface exp0:
bigpipe interface exp0 dest enable
The following example command enables source processing on the interface exp1:
bigpipe interface exp1 source enable
· To enable source processing for this interface, click the Enable Source Processing check box.
· To enable destination processing for this interface, click the Enable Destination Processing check box.
You can use the adminport option to control the security on an interface. The lockdown keyword configures the port lockdown used in previous versions of the BIG/ip Controller on the specified interface. If you use this option when you configure an interface, only ports essential to the configuration and operation of BIG/ip Controller and 3DNS Controller are opened. The open keyword allows all connections to and from BIG/ip Controller through the interface you specify.
Use the following syntax to configure interface security on the specified interface:
bigpipe interface <interface> adminport lockdown
bigpipe interface <interface> adminport open
Use the following example command to lock down connections to all ports except the administration ports on exp0:
bigpipe interface exp0 adminport lockdown
Use the following example command to allow connections to all ports on exp1:
bigpipe interface exp1 adminport open
Choose this option to lock down all ports except the ports used for administrative access on this interface.
Choose this option to allow connections to all ports on this interface.
You can use the BIG/ip Controller virtual server options in combinations that match the hardware and load balancing needs of your network. This section describes advanced virtual server configurations, including:
In situations where the BIG/ip Controller accepts connections for virtual servers from more than one router or firewall, you can send the return data back through the same device from which the connection originated. You can use this option to spread the load among outbound routers or firewalls, or to ensure that connections go through the same device if that device is connection-oriented, such as a proxy, cache, or VPN router.
The device from which a connection originated is sometimes referred to as the last hop to the BIG/ip Controller. You can configure the BIG/ip Controller to send packets back to the device from which the connection originated when that device is part of a last hop pool of devices associated with a virtual server.
To set up per-connection routing, you must first set up a last hop pool. A last hop pool defines the list of routers as a pool from which the BIG/ip Controller receives packets. For detailed information about setting up a pool, see Chapter 3, Working with Intelligent Traffic Control.
The BIG/ip Controller determines the MAC address of the routers when the pool is defined. Then the pool is associated with the virtual server by using the lasthop keyword to specify the last hop pool for the virtual server. Then, when a packet arrives for the virtual server, the MAC address that the packet came from is matched up with the MAC address of the members of the last hop pool. The IP address of the matching member is stored with the connection as the last hop address. Then, connections coming from nodes and heading out towards the client are sent to the last hop address, instead of to the default route.
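The MAC-matching behavior described above can be sketched as follows. The data structures are hypothetical; the controller's actual connection table is internal:

```python
def record_last_hop(connection, packet_mac, lasthop_pool):
    # On a new connection, match the frame's source MAC address against
    # the last hop pool members and remember the matching router's IP
    # address with the connection.
    for member in lasthop_pool:
        if member["mac"] == packet_mac:
            connection["lasthop"] = member["ip"]
            break
    return connection

def return_route(connection, default_gateway):
    # Outbound packets for the connection go back through the recorded
    # last hop instead of the default route.
    return connection.get("lasthop", default_gateway)
```

This is why return traffic reaches the same router, proxy, or VPN device that originated the connection, rather than the default route.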
Use the following syntax to configure last hop pools for virtual servers:
bigpipe vip <vip>:<port> lasthop pool <pool_name>
For example, you might use the following command:
bigpipe vip 192.168.1.10:80 lasthop pool <pool_name>
Before you follow this procedure, you must configure at least one pool (for your routers) and one virtual server.
A forwarding virtual server is just like other virtual servers, except that the virtual server has no nodes to load balance. It simply forwards the packet directly to the node. Connections are added, tracked, and reaped just as with other virtual servers. You can also view statistics for forwarding virtual servers.
Use the following syntax to configure forwarding virtual servers:
bigpipe vip <vip>:<port> [ netmask <netmask> ] forward
For example, to allow only one service in:
bigpipe vip 184.108.40.206:80 forward
Use the following command to allow only one server in:
bigpipe vip 220.127.116.11:0 forward
To forward all traffic:
bigpipe vip 0.0.0.0:0 forward
Currently, there can be only one wildcard virtual server per interface, whether that is a forwarding virtual server or not. In some of the configurations described here, you need to set up a wildcard virtual server on one side of the BIG/ip Controller to load balance connections across transparent devices. Another wildcard virtual server is required on the other side of the BIG/ip Controller to receive connections from the transparent devices and forward them to their destination. You can use another new feature, transparent device persistence, with forwarding virtual servers to route connections back through the device from which the connection originated. For more information about per-connection routing, see Using per-connection routing, on page 2-29. In these configurations, you would need to create a forwarding virtual server for each possible destination network or host if a wildcard virtual server is already defined to handle traffic coming from the other direction. For an example of these configuration settings in a network, see VPN load balancing, on page 7-17.
A new option for virtual servers adds the ability to control whether address translation is enabled for a virtual server. By default, wildcard virtual servers have translation turned off. A new translate keyword allows you to turn off address translation for non-wildcard virtual servers. This option is useful when the BIG/ip Controller is load balancing devices that have the same IP address. This is typical with the nPath routing configuration, where duplicate IP addresses are configured on the loopback device of several servers.
Use the following syntax to configure address translation for virtual servers:
bigpipe vip <vip>:<port> translate addr [ enable | disable ]
For example, use the following syntax to disable address translation for the virtual server 18.104.22.168:80:
bigpipe vip 18.104.22.168:80 translate addr disable
A new option for virtual servers adds the ability to control whether port translation is enabled for a virtual server. Port translation is turned on by default, except for wildcard virtual servers. A further exception is when the port defined for a member is port zero; members with a zero port cannot do translation, because zero is not a valid port.
Use the following syntax to configure virtual server port translation:
bigpipe vip <vip>:<port> translate port [ enable | disable ]
For example, if you want to disable port translation for the virtual server/port combination 126.96.36.199:0, type the following command:
bigpipe vip 126.96.36.199:0 translate port disable
A new option for virtual servers provides the ability to reset connections if a service is down. When this attribute is enabled for a virtual server, the BIG/ip Controller sends resets to the end points of TCP connections when it is determined that the service they are using has gone down.
This is currently only enabled for service checks that mark a node down. Node pings that time out do not cause resets to be sent.
Only TCP connections can receive a Reset. UDP connections are not aborted because there is no shutdown mechanism for UDP connections.
Use the following syntax to reset connections when a service is down:
bigpipe vip <vip>:<port> svcdown_reset [ enable | disable ]