Virtual Servers
A virtual server is one of the most important components of any BIG-IP® Local Traffic Manager™ configuration. When you configure a virtual server, you create two Local Traffic Manager objects: a virtual server and a virtual address.
A virtual server is a traffic-management object on the BIG-IP system that is represented by an IP address and a service. Clients on an external network can send application traffic to a virtual server, which then directs the traffic according to your configuration instructions. The main purpose of a virtual server is often to balance traffic load across a pool of servers on an internal network. Virtual servers increase the availability of resources for processing client requests.
Not only do virtual servers distribute traffic across multiple servers, but they can also treat varying types of traffic differently, depending on your traffic-management needs. For example, a virtual server can enable compression on HTTP request data as it passes through the BIG-IP system, or decrypt and re-encrypt SSL connections and verify SSL certificates. For each type of traffic, such as TCP, UDP, HTTP, SSL, SIP, and FTP, a virtual server can apply an entire group of settings to affect the way that Local Traffic Manager manages that traffic type.
A virtual server can also enable session persistence for a specific traffic type. Through a virtual server, you can set up session persistence for HTTP, SSL, SIP, and MSRDP sessions, to name a few.
Finally, a virtual server can apply an iRule, which is a user-written script designed to inspect and direct individual connections in specific ways. For example, you can create an iRule that searches the content of a TCP connection for a specific string and, if found, directs the virtual server to send the connection to a specific pool or pool member.
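For illustration only, here is a minimal iRule sketch of that idea, assuming an HTTP profile is assigned to the virtual server and that a pool named pool_images exists (the pool name is hypothetical):

    when HTTP_REQUEST {
        # If the requested URI begins with /images, send this connection
        # to the pool_images pool; otherwise the virtual server's default
        # pool handles it.
        if { [HTTP::uri] starts_with "/images" } {
            pool pool_images
        }
    }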
Directing traffic to a load balancing pool
A Standard virtual server (also known as a load balancing virtual server) directs client traffic to a load balancing pool and is the most basic type of virtual server. When you first create the virtual server, you assign an existing default pool to it. From then on, the virtual server automatically directs traffic to that default pool.
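As a minimal sketch, you could create such a virtual server with tmsh, assuming a pool named pool_web already exists (the object names are illustrative, and syntax can vary between BIG-IP versions):

    tmsh create ltm virtual vs_web destination 10.10.10.2:80 pool pool_web

From then on, traffic arriving at 10.10.10.2:80 is load balanced across the members of pool_web unless an iRule directs it elsewhere.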
Sharing an IP address with a VLAN node
You can set up a Forwarding (Layer 2) virtual server to share the same IP address as a node in an associated VLAN. To do this, you must perform some additional configuration tasks. These tasks consist of: creating a VLAN group that includes the VLAN in which the node resides, assigning a self-IP address to the VLAN group, and disabling the virtual server on the relevant VLAN.
Forwarding traffic to a specific destination IP address
A Forwarding (IP) virtual server is just like other virtual servers, except that a forwarding virtual server has no pool members to load balance. The virtual server simply forwards the packet directly to the destination IP address specified in the client request. When you use a forwarding virtual server to direct a request to its originally specified destination IP address, Local Traffic Manager adds, tracks, and reaps these connections just as with other virtual servers. You can also view statistics for a forwarding virtual server.
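As a hedged sketch, a Forwarding (IP) virtual server that forwards traffic destined for an example 10.20.0.0/16 network might look like this in tmsh (names and addresses are illustrative):

    tmsh create ltm virtual vs_forward_net destination 10.20.0.0:any mask 255.255.0.0 ip-forward

Because the ip-forward property is set, no pool is assigned; the system forwards each packet to the destination IP address in the client request.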
Increasing the speed of processing HTTP traffic
A Performance (HTTP) virtual server is a virtual server with which you associate a Fast HTTP profile. Together, the virtual server and profile increase the speed at which the virtual server processes HTTP requests.
Increasing the speed of processing Layer 4 traffic
A Performance (Layer 4) virtual server is a virtual server with which you associate a Fast L4 profile. Together, the virtual server and profile increase the speed at which the virtual server processes Layer 4 requests.
Relaying DHCP traffic
You can create a type of virtual server that relays Dynamic Host Configuration Protocol (DHCP) messages between clients and servers residing on different IP networks. Known as a DHCP relay agent, a BIG-IP system with a DHCP Relay type of virtual server listens for DHCP client messages being broadcast on the subnet and then relays those messages to the DHCP server. The DHCP server then uses the BIG-IP system to send the responses back to the DHCP client. Configuring a DHCP Relay virtual server on the BIG-IP system relieves you of the tasks of installing and running a separate DHCP server on each subnet.
When you create a virtual server, you specify the pool or pools that you want to serve as the destination for any traffic coming from that virtual server. You also configure its general properties, some configuration options, and other resources you want to assign to it, such as iRules or session persistence types.
To configure and manage virtual servers, log in to the BIG-IP Configuration utility, and on the Main tab, expand Local Traffic, and click Virtual Servers.
A virtual address is the IP address with which you associate a virtual server. For example, if a virtual server's IP address and service are 10.10.10.2:80, then the IP address 10.10.10.2 is a virtual address.
You can create a many-to-one relationship between virtual servers and a virtual address. For example, you can create the three virtual servers 10.10.10.2:80, 10.10.10.2:443, and 10.10.10.2:161 for the same virtual address, 10.10.10.2.
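For example, using tmsh, those three virtual servers might be created as follows (the pool names are hypothetical):

    tmsh create ltm virtual vs_http destination 10.10.10.2:80 pool pool_http
    tmsh create ltm virtual vs_https destination 10.10.10.2:443 pool pool_https
    tmsh create ltm virtual vs_snmp destination 10.10.10.2:161 pool pool_snmp

Creating the first of these also creates the virtual address 10.10.10.2, as described below.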
You can enable and disable a virtual address. When you disable a virtual address, none of the virtual servers associated with that address can receive incoming network traffic.
You create a virtual address indirectly when you create a virtual server. When this happens, Local Traffic Manager internally associates the virtual address with a MAC address. This in turn causes the BIG-IP system to respond to Address Resolution Protocol (ARP) requests for the virtual address, and to send gratuitous ARP requests and responses with respect to the virtual address. As an option, you can disable ARP activity for virtual addresses, in the rare case that ARP activity affects system performance. This most likely occurs only when you have a large number of virtual addresses defined on the system.
Note: To ensure that a server response returns through the BIG-IP system, you must configure the default route on the server to be an internal VLAN's self IP address. If you cannot do this because the server is on a different network than the BIG-IP system, you can create a SNAT instead.
A host virtual server represents a specific site, such as an Internet web site or an FTP site, and it load balances traffic targeted to content servers that are members of a pool.
The IP address that you assign to a host virtual server should match the IP address that Domain Name System (DNS) associates with the site's domain name. When the BIG-IP system receives a connection request for that site, Local Traffic Manager recognizes that the client's destination IP address matches the IP address of the virtual server, and subsequently forwards the client request to one of the content servers that the virtual server load balances.
A network virtual server is a virtual server whose IP address has no bits set in the host portion of the IP address (that is, the host portion of its IP address is 0). There are two kinds of network virtual servers: those that direct client traffic based on a range of destination IP addresses, and those that direct client traffic based on specific destination IP addresses that the BIG-IP system does not recognize.
With an IP address whose host bit is set to 0, a virtual server can direct client connections that are destined for an entire range of IP addresses, rather than for a single destination IP address (as is the case for a host virtual server). Thus, when any client connection targets a destination IP address that is in the network specified by the virtual server IP address, Local Traffic Manager can direct that connection to one or more pools associated with the network virtual server.
For example, the virtual server can direct client traffic that is destined for any of the nodes on the 192.168.1.0 network to a specific load balancing pool such as ingress-firewalls. Or, a virtual server could direct a web connection destined to any address within the subnet 192.168.1.0/24, to the pool default_webservers.
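As a sketch of the second example, a network virtual server for the 192.168.1.0/24 subnet might be created in tmsh as follows (syntax can vary between BIG-IP versions):

    tmsh create ltm virtual vs_subnet_web destination 192.168.1.0:80 mask 255.255.255.0 pool default_webservers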
Besides directing client connections that are destined for a specific network or subnet, a network virtual server can also direct client connections that have a specific destination IP address that the virtual server does not recognize, such as a transparent device. This type of network virtual server is known as a wildcard virtual server.
Wildcard virtual servers are a special type of network virtual server designed to manage network traffic that is targeted to transparent network devices. Examples of transparent devices are firewalls, routers, proxy servers, and cache servers. A wildcard virtual server manages network traffic that has a destination IP address unknown to the BIG-IP system.
A host-type of virtual server typically manages traffic for a specific site. When receiving a connection request for that site, Local Traffic Manager forwards the client to one of the content servers that the virtual server load balances.
However, when load balancing transparent nodes, the BIG-IP system might not recognize a client's destination IP address. The client might be connecting to an IP address on the other side of the firewall, router, or proxy server. In this situation, Local Traffic Manager cannot match the client's destination IP address to a virtual server IP address.
Wildcard network virtual servers solve this problem by not translating the incoming IP address at the virtual server level on the BIG-IP system. For example, when Local Traffic Manager does not find a specific virtual server match for a client's destination IP address, Local Traffic Manager matches the client's destination IP address to a wildcard virtual server, designated by an IP address of 0.0.0.0. Local Traffic Manager then forwards the client's packet to one of the firewalls or routers that the wildcard virtual server load balances, which in turn forwards the client's packet to the actual destination IP address.
Default wildcard virtual servers
A default wildcard virtual server is a wildcard virtual server that uses port 0 and handles traffic for all services. A default wildcard virtual server is enabled for all VLANs by default. However, you can specifically disable any VLANs that you do not want the default wildcard virtual server to support. You disable VLANs for the default wildcard virtual server by creating a VLAN disabled list. Note that a VLAN disabled list applies to default wildcard virtual servers only. You cannot create a VLAN disabled list for a wildcard virtual server that is associated with one VLAN only.
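A hedged tmsh sketch of a default wildcard virtual server, reusing the ingress-firewalls pool from the earlier example and disabling the virtual server on a VLAN named internal (both names are illustrative):

    tmsh create ltm virtual vs_wildcard destination 0.0.0.0:0 mask 0.0.0.0 pool ingress-firewalls vlans-disabled vlans add { internal }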
Port-specific wildcard virtual servers
A port-specific wildcard virtual server handles traffic only for a particular service, and you define it using a service name or a port number. You can use port-specific wildcard virtual servers for tracking statistics for a particular type of network traffic, or for routing outgoing traffic, such as HTTP traffic, directly to a cache server rather than a firewall or router.
If you use both a default wildcard virtual server and port-specific wildcard virtual servers, any traffic that does not match either a standard virtual server or one of the port-specific wildcard virtual servers is handled by the default wildcard virtual server.
We recommend that when you define transparent nodes that need to handle more than one type of service, such as a firewall or a router, you specify an actual port for the node and turn off port translation for the virtual server.
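For example, you might turn off port translation on an existing virtual server with tmsh (continuing the illustrative name from the sketch above):

    tmsh modify ltm virtual vs_wildcard translate-port disabled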
You can define multiple wildcard virtual servers that run simultaneously. Each wildcard virtual server must be assigned to an individual VLAN, and therefore can handle packets for that VLAN only.
In some configurations, you need to set up a wildcard virtual server on one side of the BIG-IP system to load balance connections across transparent devices. You can then create another wildcard virtual server on the other side of the BIG-IP system to receive the connections coming from the transparent devices and forward them to their destination.
A virtual server has a number of properties and settings that you can configure to affect the way that a virtual server manages traffic. You can also assign certain resources to a virtual server, such as a load balancing pool and a persistence profile. Together, these properties, settings, and resources represent the definition of a virtual server, and most have default values. When you create a virtual server, you can either retain the default values or adjust them to suit your needs.
In addition to assigning various traffic profiles to a virtual server, you can also assign a pool, an iRule, and two persistence profiles. The pool, iRule, and persistence profiles that you assign to a virtual server are known as resources.
If you have created a virtual server that is a load balancing type of virtual server, one of the resources you must assign to the virtual server is a default load balancing pool. A default pool is the pool to which Local Traffic Manager sends traffic if no iRule exists specifying a different pool. Note that if you plan on using an iRule to direct traffic to a pool, you must assign the iRule as a resource to the virtual server.
In the Configuration utility, virtual server settings are grouped into three categories: General properties, configuration settings (basic and advanced), and resources (basic and advanced). The following sections describe the settings that these three categories contain.
The type of virtual server you want to create and its IP address. If the type you select is network, then this property also includes the mask for the IP address. This property is required.
The netmask for a network virtual server. This property applies to a network virtual server only, and is required. The netmask clarifies whether the host bit is an actual zero or a wildcard representation.
The state of the virtual server, that is, Enabled or Disabled. As an option, you can enable or disable a virtual server for a specific VLAN. Note that when you disable a virtual server, the virtual server no longer accepts new connection requests. However, it allows current connections to finish processing before going to a down state.
Note: If no VLAN is specified, then the Enabled or Disabled setting applies to all VLANs.
When creating a virtual server, you can configure a number of settings. Table 2.2 lists and describes these virtual server configuration settings. Because all of these settings have default values, you are not required to change these settings.
The type of virtual server configuration. Choices are: Standard, Forwarding (IP), Forwarding (Layer 2), Performance (HTTP), Performance (Layer 4), DHCP Relay, and Reject. Note that if set to Reject, this setting causes the BIG-IP system to reject any traffic destined for the virtual server IP address.
The network protocol name for which you want the virtual server to direct traffic. Possible protocol names are TCP, UDP, and SCTP.
One benefit of this feature is that you can load balance virtual private network (VPN) client connections across several VPNs, eliminating the possibility of a single point of failure. A typical use of this feature is for load balancing multiple VPN gateways in an IPSEC VPN sandwich, using non-translating virtual servers.
An important point to note is that although address translation of such protocols can be optionally activated, some protocols, such as IPSEC in AH mode, rely on the IP headers remaining unchanged. In such cases, you should use non-translating network virtual servers.
A setting that designates the selected profile as a client-side profile. Applies to TCP and UDP connections only. When creating a Performance (HTTP) type of virtual server, this value is set to fasthttp, and you cannot change it. Similarly, when creating a Performance (Layer 4) type of virtual server, this value is set to fastl4, and you cannot change it.
A setting that designates the selected profile as a server-side profile. Applies to TCP and UDP connections only. The default is Use Client Profile. Note that this setting does not appear when creating a Performance (HTTP) or Performance (Layer 4) type of virtual server.
The name of an existing OneConnectTM profile for managing connection persistence. Note that this setting does not appear when creating a Performance (HTTP) or Performance (Layer 4) type of virtual server.
Important: The way that you configure the Maximum Size setting of the OneConnect profile can affect virtual server availability.
The name of an existing NTLM profile. When used in conjunction with a OneConnect profile, an NTLM profile pools server-side connections for NT LAN Manager (NTLM) traffic.
The name of an existing HTTP profile for managing HTTP traffic. Note that this setting does not appear when creating a Performance (HTTP) or Performance (Layer 4) type of virtual server.
The name of an existing FTP profile for managing FTP traffic. Note that this setting does not appear when creating a Performance (HTTP) or Performance (Layer 4) type of virtual server.
The name of an existing Client SSL profile for managing client-side SSL traffic. Note that this setting does not appear when creating a Performance (HTTP) or Performance (Layer 4) type of virtual server. You can assign a Client SSL profile for both TCP and UDP traffic.
The name of an existing SSL profile for managing server-side SSL traffic. Note that this setting does not appear when creating a Performance (HTTP) or Performance (Layer 4) type of virtual server.
The name of an existing authentication profile for managing an authentication mechanism. Examples are a remote LDAP or RADIUS server. Note that this setting does not appear when creating a Performance (HTTP) or Performance (Layer 4) type of virtual server.
The name of an existing Stream profile for searching and replacing strings within a data stream, such as a TCP connection. Note that this setting does not appear when creating a Performance (HTTP) or Performance (Layer 4) type of virtual server.
The name of an existing RTSP profile. Real Time Streaming Protocol (RTSP) is a protocol used for streaming-media presentations. Using RTSP, a client system can control a remote streaming-media server and allow time-based access to files on a server.
The name of an existing SIP profile for managing SIP traffic. Note that the SIP Profile option is only available with a Standard type virtual server.
A list of the traffic classes you would like to assign to the virtual server. Any traffic flows that match the criteria defined in a traffic class are tagged with a classification ID.
The maximum number of concurrent connections allowed for the virtual server. Setting this to 0 turns off connection limits.
A setting that mirrors connections from the active unit to the standby unit of a redundant pair. This setting provides higher reliability, but might affect system performance. The default is Disabled (unchecked).
Important: To ensure that a standby unit retains its mirrored connections after a reboot operation, we recommend that you enable connection mirroring on Performance (Layer 4) virtual servers only. (For more information, see the description of the Type setting in this table, as well as What is a virtual server?) We also recommend that you set up a direct link (trunk) between the peer units as a way to dedicate bandwidth for mirroring the connections. This prevents potential performance problems or loss of mirrored information.
A setting to enable or disable address translation on a BIG-IP system. This option is useful when the BIG-IP system is load balancing devices that have the same IP address. This is typical with the nPath routing configuration, where duplicate IP addresses are configured on the loopback device of several servers. The default is Enabled (checked).
A setting to enable or disable port translation on a BIG-IP system. Turning off port translation for a virtual server is useful if you want to use the virtual server to load balance connections to any service. The default is Enabled (checked).
Preserve: Specifies that the system preserves the value configured for the source port, unless the source port from a particular SNAT is already in use, in which case the system uses a different port.
Preserve Strict: Specifies that the system preserves the value configured for the source port. If the port is already in use, the system does not process the connection. F5 Networks recommends that you restrict use of this setting to cases that meet at least one of the following conditions:
The system is configured for nPath routing or is running in transparent mode (that is, there is no translation of any other Layer 3 or Layer 4 field).
There is a one-to-one relationship between virtual IP addresses and node addresses, or clustered multi-processing (CMP) is disabled.
Change: Specifies that the system changes the source port. This setting is useful for obfuscating internal network addresses.
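A hedged tmsh example of applying one of these source port behaviors to an existing virtual server (the virtual server name is illustrative):

    tmsh modify ltm virtual vs_web source-port preserve-strict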
Assigns an existing SNAT pool to the virtual server, or enables the Automap feature. When you use this setting, the BIG-IP system automatically maps all original source IP addresses passing through the virtual server to an address in the SNAT pool. Possible values are: None, Auto Map, or the name of an existing SNAT pool.
Used for intrusion detection, this feature causes the virtual server to replicate client-side traffic (prior to address translation) to a member of the specified clone pool. A clone pool receives all of the same traffic as the normal pool. You therefore use clone pools to copy traffic to intrusion detection systems.
You can also configure the Clone Pool (Server) setting.
Used for intrusion detection, this feature causes the virtual server to replicate server-side traffic (after address translation) to a member of the specified clone pool. A clone pool receives all of the same traffic as the normal pool. You therefore use clone pools to copy traffic to intrusion detection systems.
You can also configure the Clone Pool (Client) setting.
Used when the BIG-IP system should direct reply traffic to any router in a pool of routers, instead of processing the traffic in the normal way (such as through the auto_lasthop setting or the routing table). The Last Hop Pool setting might be useful if you want to exclude a set of routers from auto_lasthop behavior, for example.
Note that by default, this setting applies only to reply traffic for connections that arrived through the last hop pool; the BIG-IP system processes other reply traffic in the normal way.
You can change this behavior by using tmsh to configure the database variable TM.LHPNoMemberAction. Before configuring a last hop pool on a virtual server, you must first create the pool, which should contain the router inside addresses.
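For example, you might view or change this database variable with tmsh (the placeholder <value> stands in for a supported value, which this chapter does not enumerate):

    tmsh list sys db tm.lhpnomemberaction
    tmsh modify sys db tm.lhpnomemberaction value <value>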
Table 2.3 lists and describes the specific resources that you can assign to a load balancing virtual server.
A list of existing iRules that you want the virtual server to apply to the data channel of FTP or RTSP traffic. This setting only appears when you have assigned an FTP or RTSP profile to the virtual server.
A list of existing iRules that you want the virtual server to use when load balancing its connections. Note that for all iRules that you select, you must configure a corresponding profile on the virtual server. For example, if you are specifying an iRule that includes HTTP commands, you must configure a default or custom HTTP profile on the virtual server. Similarly, if you are implementing an authentication iRule, you must configure a default or custom authentication profile.
If the iRule you want to implement does not appear in the iRules list, the iRule does not exist and you must first create it. If the iRules setting does not appear on the New Virtual Server screen, check your licensing.
The pool name that you would like the virtual server to use as the default pool. A load balancing virtual server sends traffic to this pool automatically, unless an iRule directs the server to send the traffic to another pool instead.
The type of persistence that you want the BIG-IP system to use. This setting is available for Standard, Performance (HTTP), and Performance (Layer 4) types of virtual servers only.
The type of persistence that the BIG-IP system should use if the system cannot use the specified default persistence. Valid types of persistence profiles for this setting are Source Address Affinity profiles and Destination Address Affinity profiles. You can also specify None. This setting is available for Standard, Performance (HTTP), and Performance (Layer 4) types of virtual servers only.
A virtual server address has a number of properties and settings that you can configure to affect the way that a virtual server manages traffic. Table 2.4 lists and describes the configuration settings of a virtual address.
The traffic group to which the virtual address belongs. The default is traffic-group-1 or traffic-group-local-only.
The virtual-server conditions under which the BIG-IP system advertises this virtual address to an advanced routing module. This setting applies only when the Route Advertisement setting is enabled (checked).
A setting that enables or disables ARP requests for the virtual address. When disabled, the BIG-IP system ignores ARP requests that other routers send for this virtual address.
A setting that inserts a route to this virtual address into the kernel routing table so that an advanced routing module can redistribute that route to other routers on the network.
To access virtual address settings, log in to the BIG-IP Configuration utility, and on the Main tab, expand Local Traffic, and click Virtual Servers. Then on the menu bar, click Virtual Address List, and in the Name column, click the name of a virtual address.
At any time, you can determine the status of a virtual server or virtual address, using the Configuration utility. You can find this information by displaying the list of virtual servers or virtual addresses and viewing the Status column, or by viewing the Availability property of the object.
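From the command line, a comparable (hedged) way to check status is with tmsh show commands, which report availability and state for these objects (the names are illustrative):

    tmsh show ltm virtual vs_web
    tmsh show ltm virtual-address 10.10.10.2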
The shape of the icon indicates the status that the monitor has reported for the virtual server or virtual address.
The color of the icon indicates the actual status of the virtual server or virtual address.
Table 2.5 Explanation of status icons for virtual servers and virtual addresses
The virtual server or virtual address is enabled but is currently unavailable. However, the virtual server or virtual address might become available later, with no user action required.
An example of a virtual server or virtual address showing this status is when the object's connection limit has been exceeded. When the number of connections falls below the configured limit, the virtual server or virtual address becomes available again.
The virtual server or virtual address is enabled but offline because an associated object has marked the virtual server or virtual address as unavailable. To change the status so that the virtual server or virtual address can receive traffic, you must actively enable the virtual server or virtual address.
The virtual server or virtual address is operational but set to Disabled. To resume normal operation, you must manually enable the virtual server or virtual address.
The BIG-IP system includes a performance feature known as clustered multi-processing, or CMP. CMP is a traffic acceleration feature that creates a separate instance of the Traffic Management Microkernel (TMM) service for each central processing unit (CPU) on the system. When CMP is enabled, the workload is shared equally among all CPUs.
Whenever you create a virtual server, the BIG-IP system automatically enables the CMP feature. When CMP is enabled, all instances of the TMM service process application traffic.
When you view standard performance graphs using the Configuration utility, you can see multiple instances of the TMM service (tmm0, tmm1, and so on).
The BIG-IP system displays some statistics individually for each TMM instance, and displays other statistics as the combined total of all TMM instances.
Note: We recommend that you disable the CMP feature if you set a small connection limit on pool members (for example, a connection limit of 2 for the 8400 platform or 4 for the 8800 platform).