Manual Chapter: BIG-IP Administrator Guide v4.0: BIG-IP Controller Overview

Applies To:


BIG-IP versions 1.x - 4.x

  • 4.0 PTF-04, 4.0 PTF-03, 4.0 PTF-02, 4.0 PTF-01, 4.0.0


Chapter 1

BIG-IP Controller Overview



Introduction

The BIG-IP Controller is an Internet appliance used to implement a wide variety of load balancing and other network traffic solutions, including intelligent cache content determination and SSL acceleration. The subsequent chapters in this guide each outline a solution or solutions and provide configuration instructions for those solutions. The purpose of this overview is to introduce you to the BIG-IP Controller, its user interfaces, and the range of solutions possible. The following topics are included:

  • User interface
  • A basic configuration
  • Configuring objects and properties
  • Load balancing modes
  • Making hidden nodes accessible
  • The external VLAN and outbound load balancing
  • BIG-IP Controllers and intranets
  • Cache control
  • SSL acceleration
  • Content conversion
  • VLANs
  • Link aggregation and failover
  • Configuring redundant BIG-IP Controller pairs
  • Monitoring and administration

User interface

The user interface to the BIG-IP Controller consists primarily of the web-based Configuration utility and the bigpipe command line interface. The Configuration utility is served by the controller's internal web server. You can access it through the administrative interface on the BIG-IP Controller using Netscape Navigator version 4.7, or Microsoft Internet Explorer version 5.0 or later. (Netscape Navigator version 6.0 is not supported.)

Figure 1.1 shows the Configuration utility as it first appears, displaying the top-level (System) screen with your existing load-balancing configuration. The Configuration utility provides an instant overview of your network as it is currently configured.

Figure 1.1 Configuration utility System screen

The left pane of the screen, referred to as the navigation pane, contains links to screens for the main configuration objects that you create and tailor for your network: Virtual Servers, Nodes, Pools, Rules, NATs, Proxies, Network, Filters, and Monitors. These screens appear in the right pane. The left pane also contains links to screens for monitoring and system administration (Statistics, Log Files, and System Admin).

A basic configuration

As suggested in the previous section, the System screen shows the objects that are currently configured for the system. These consist of virtual servers, nodes, and a load-balancing pool. What these objects represent is shown in Figure 1.2, a very basic configuration.

Figure 1.2 A basic configuration

In this configuration, the controller sits between a router and an array of content servers, and load balances inbound Internet traffic across those servers.

Insertion of the BIG-IP Controller, with its standard two interfaces, divides the network into an external VLAN and an internal VLAN. (However, both VLANs can be on a single IP network, so that inserting the BIG-IP Controller does not require you to change the IP addressing of the network.) The nodes on the external VLAN are routable. The nodes on the internal VLAN, however, are hidden behind the BIG-IP Controller. What will appear in their place is a user-defined virtual server. It is this virtual server that receives requests and distributes them among the physical servers, which are now members of a load-balancing pool.

The key to load balancing through a virtual server is address translation, together with setting the BIG-IP Controller address as the default route. By default, the virtual server translates the destination address of the incoming packet to that of the server it load balances to, and that address becomes the source address of the reply packet. The reply packet returns through the BIG-IP Controller as the default route, and the controller translates its source address back to that of the virtual server.

Configuring objects and object properties

Abstract entities like virtual servers and load balancing pools are called configuration objects, and the options associated with them, like load balancing mode, are called object properties. The basic configuration shown in Figure 1.2 contains three types of objects: node, pool, and virtual server. You can create these objects by clicking the object type in the left pane of the Configuration utility. For example, the pool was created by clicking Pools to open the Pools screen, then clicking the Add (+) button to open the Add Pool screen, shown in Figure 1.3.

Figure 1.3 Add Pool screen

The same pool would be configured at the BIG-IP Controller command line using bigpipe as follows:

b pool my_pool { member 11.12.11.20:80 member 11.12.11.21:80 member 11.12.11.22:80 }

Either configuration method results in the entry in Figure 1.4 being placed in the file /config/bigip.conf on the controller. You can also edit this file directly using a text editor like vi or pico.

Figure 1.4 Pool definition in bigip.conf

pool my_pool {
   member 11.12.11.20:80
   member 11.12.11.21:80
   member 11.12.11.22:80
}
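
The pool by itself receives no traffic until it is associated with a virtual server. A minimal sketch of the corresponding bigpipe command, assuming a hypothetical virtual server address of 11.12.11.100, takes this general form:

b virtual 11.12.11.100:80 use pool my_pool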

For a complete description of the configuration objects and properties, refer to the BIG-IP Reference Guide, Chapter 1, Configuring the BIG-IP Controller.

Load balancing modes

Load balancing is the distribution of network traffic across servers that are elements in the load balancing pool. The user may select from a range of load balancing methods, or modes. The simplest mode is round robin, in which servers are addressed in a set order and the next request always goes to the next server in the order. Other load balancing modes include ratio, dynamic ratio, fastest, least connections, observed, and predictive.

  • In ratio mode, connections are distributed based on weight attribute values that represent load capacity.
  • In dynamic ratio mode, ratio weights are determined dynamically from server performance measurements (such as response time) supplied by external software.
  • In fastest mode, the first server to respond is picked. In least connections mode, the least busy server is picked.
  • Observed and predictive modes are combinations of the simpler modes.

    For a complete description of the load balancing modes, refer to Pools in the BIG-IP Reference Guide, Chapter 1, Configuring the BIG-IP Controller.
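
For example, a pool that uses ratio mode might be defined as follows, with ratio weights then assigned to the nodes. This is a sketch only; the lb_method keyword, the mode name, and the ratio command shown here are assumptions to be verified against the Reference Guide:

b pool ratio_pool { lb_method ratio member 11.12.11.20:80 member 11.12.11.21:80 member 11.12.11.22:80 }
b ratio 11.12.11.20 3
b ratio 11.12.11.21 2
b ratio 11.12.11.22 1

In this sketch, the first node receives roughly three of every six new connections, the second node two, and the third node one.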

BIG-IP Controllers and intranets

So far, the discussion has been limited to load balancing incoming traffic to the internal VLAN. The BIG-IP Controller can also load balance outbound traffic across routers or firewalls on the external VLAN. This creates the intranet configuration shown in Figure 1.5, which load balances traffic from intranet clients to local servers, to a local cache, or to the Internet.

Figure 1.5 A basic intranet configuration

This solution utilizes two wildcard virtual servers: Wildcard Virtual Server1, which is HTTP port specific, and Wildcard Virtual Server2, which is not port specific. This way, all HTTP requests to addresses not on the intranet are directed to the cache server, which provides the content if it is cached, and otherwise retrieves it directly from the Internet. All non-HTTP requests to addresses not on the intranet are directed to the Internet.
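
A sketch of the two wildcard virtual servers, using hypothetical pool names (cache_pool for the cache server and internet_pool for the Internet routers), might look like this:

b virtual 0.0.0.0:80 use pool cache_pool
b virtual 0.0.0.0:0 use pool internet_pool

The first virtual server is HTTP port specific (port 80); the second, with port 0, matches all remaining ports.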

For detailed information on this solution, refer to Chapter 4, A Simple Intranet Configuration.

Bidirectional load balancing

The intranet configuration shown in Figure 1.5 would typically be part of a larger configuration supporting both inbound and outbound traffic.

Figure 1.6 shows traffic being load balanced bidirectionally across three firewalls.

Figure 1.6 Load balancing firewalls

This configuration requires two BIG-IP Controllers (or redundant controller pairs), and the creation of three load balancing pools with corresponding virtual servers. A virtual server on the inside BIG-IP Controller (BIG-IP Controller1 in Figure 1.6) load balances incoming requests across the enterprise servers. A virtual server on the outside BIG-IP Controller (BIG-IP Controller2 in Figure 1.6) load balances incoming requests across the external interfaces of the firewalls. A third virtual server on the inside BIG-IP Controller load balances outbound requests across the internal interfaces of the firewalls.
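
As a rough sketch only, using hypothetical addresses and pool names, the three pools and their virtual servers might be created along these lines; the full procedure, including the address translation and routing settings this sketch omits, is given in Chapter 9:

On BIG-IP Controller2 (outside):

b pool fw_outside { member 10.1.1.1:0 member 10.1.1.2:0 member 10.1.1.3:0 }
b virtual 192.168.1.100:80 use pool fw_outside

On BIG-IP Controller1 (inside):

b pool servers { member 172.16.1.10:80 member 172.16.1.11:80 }
b virtual 10.1.2.100:80 use pool servers
b pool fw_inside { member 10.1.2.1:0 member 10.1.2.2:0 member 10.1.2.3:0 }
b virtual 0.0.0.0:0 use pool fw_inside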

For detailed information on this solution, refer to Chapter 9, Balancing Two-Way Traffic Across Firewalls.

Cache control

Using cache control features, you can create rules to distribute content among three server pools: an origin server pool, a cache pool for cacheable content, and a hot pool. The origin pool members contain all content. The cache pool members contain content that is considered cacheable (for example, all HTTP and all GIF content). The hot pool members contain cacheable content that is considered hot, that is, frequently accessed, as determined by a threshold you set. Once identified, hot content is distributed and load balanced across the hot pool to maximize processing power while it is hot, and is localized to individual caches when it is cool (less frequently accessed).
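
A sketch of such a rule, as it might appear in bigip.conf, is shown below; the matching expression, pool names, and attribute names here are illustrative only, and the exact cache statement syntax is documented under Rules in the Reference Guide:

rule cache_rule {
   cache ( http_uri ends_with "gif" ) {
      origin_pool origin_servers
      cache_pool cache_servers
      hot_pool hot_servers
      hot_threshold 100
   }
}

The rule is then associated with a virtual server (for example, with a command of the form b virtual <address>:80 use rule cache_rule) so that requests for cacheable content go to the cache pool and everything else goes to the origin pool.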

A special cache feature is destination address affinity (also called sticky persistence). This feature directs requests for a certain destination to the same proxy server, regardless of which client the request comes from. This saves the other proxies from having to duplicate the web page in their caches, which would waste memory.

For detailed information about cache rules, refer to Rules in the BIG-IP Reference Guide, Chapter 1, Configuring the BIG-IP Controller.

SSL acceleration

SSL acceleration uses special software with an accelerator card to speed the encryption and decryption of encoded content. This greatly speeds the flow of HTTPS traffic without affecting the flow of non-HTTPS traffic. In addition, using add-on BIG-IP e-Commerce Controllers, it is possible to create a scalable configuration that can grow with your network.
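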

For detailed information about SSL acceleration, refer to Chapter 8, Configuring an SSL Accelerator.

Content conversion

Content conversion is the on-the-fly switching of URLs to ARLs (Akamai Resource Locators) for web resources that are stored geographically nearby on the Akamai FreeFlow Network™. This greatly speeds the download of large, slow-to-load graphics and other types of objects.

For detailed information about content conversion, refer to Chapter 13, Configuring a Content Converter.

VLANs

The internal and external VLANs created on the BIG-IP Controller are, by default, the separate port-specified VLANs external and internal, with the BIG-IP Controller functioning as an L2 switch. In conformance with IEEE 802.1Q, the BIG-IP Controller supports both port-specified and tagged VLANs. This adds the efficiency and flexibility of VLAN segmentation to traffic handling between the networks. For example, with VLANs it is no longer necessary to change any IP addresses after inserting a BIG-IP Controller into a single network.

VLAN capability also supports multi-site hosting and allows the BIG-IP Controller to fit into and extend a pre-existing VLAN segmentation, or to serve as a VLAN switch in creating a VLAN segmentation for the wider network.

For detailed information on VLANs, refer to VLANs in the BIG-IP Reference Guide, Chapter 1, Configuring the BIG-IP Controller.

Link aggregation and link failover

Links (individual physical interfaces) on the BIG-IP Controller may be aggregated by software means to form a trunk (an aggregation of links). Link aggregation increases the bandwidth of the individual links in an additive manner; thus four Fast Ethernet links, if aggregated, create a single 400 Mb/s link. Link aggregation is highly useful with asymmetric loads. Another advantage of link aggregation is link failover: if one link in a trunk goes down, traffic is simply redistributed over the remaining links. Link aggregation conforms to IEEE 802.3ad.

Configuring redundant BIG-IP Controller pairs

BIG-IP Controllers may be configured in redundant pairs, with one unit active and the other in standby mode. This is made convenient by the fact that once one unit has been configured, the configuration can be copied automatically to the other unit, a process called synchronization. Once the systems have been synchronized, a failure detection system determines whether the active unit is in failure mode and automatically redirects traffic to the standby unit. This process is called failover.

A special feature of redundant pairs is optional state mirroring. When you use the mirroring feature, the standby controller maintains the same state information as the active controller. Transactions such as FTP file transfers continue as though uninterrupted if the standby controller becomes active.
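
A sketch of the related bigpipe commands follows; the virtual server address is hypothetical, and the exact command options should be verified against the Reference Guide:

b config sync all
b virtual 11.12.11.100:80 mirror conn enable

The first command copies the configuration to the peer unit; the second enables connection mirroring for a particular virtual server.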

For detailed information about configuring redundant pairs, refer to Redundant Systems in the BIG-IP Reference Guide, Chapter 1, Configuring the BIG-IP Controller.

Making hidden nodes accessible

To perform load balancing, the BIG-IP Controller hides physical servers behind a virtual server. This prevents them from receiving direct administrative connections or from initiating requests as clients (for example, to download software upgrades). There are two basic methods for making nodes on the internal VLAN routable to the outside world: forwarding and address translation.

Forwarding

Forwarding is the simple exposure of a node's IP address to the BIG-IP Controller's external VLAN so that clients can use it as a standard routable address. There are two types of forwarding: IP forwarding and the forwarding virtual server. IP forwarding exposes all nodes and all ports on the internal VLAN. You can use the IP filter feature to implement a layer of security.

A forwarding virtual server is like IP forwarding but exposes only selected servers and/or ports.
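
A sketch of a forwarding virtual server, assuming a node at the hypothetical address 11.12.11.20 whose HTTP port should be reachable from the external VLAN, takes this general form:

b virtual 11.12.11.20:80 forward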

Address translation

Address translation consists of providing a routable alias that a node can use as its source address when acting as a client. There are two types of address translation: NAT (Network Address Translation) and SNAT (Secure Network Address Translation). NATs are assigned one per node and can be used for both outbound and inbound connections. SNATs may be assigned to multiple nodes and permit only outbound connections, which makes them more secure.
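
A sketch of each, using hypothetical internal node addresses (11.12.11.20 through 11.12.11.22) and hypothetical routable addresses on the external VLAN:

b nat 11.12.11.20 to 11.12.1.50
b snat map 11.12.11.21 11.12.11.22 to 11.12.1.60

The NAT gives a single node its own routable address; the SNAT lets several nodes share one routable address for outbound connections.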

For detailed information about address translation, refer to the sections NATs, SNATs, and IP Forwarding in the BIG-IP Reference Guide, Chapter 1, Configuring the BIG-IP Controller.

Monitoring and administration

The BIG-IP Controller provides two types of monitoring: health monitoring and statistical monitoring.

Health monitors

Health monitoring is the automatic periodic checking of all nodes in load balancing pools to determine whether each node is fully functional. A node that fails its health check is marked down, and traffic is no longer directed to it. The controller offers ECV (Extended Content Verification) and EAV (Extended Application Verification) monitors covering all the standard protocols. All monitors are user-configurable, and a special external monitor is included for user-supplied pingers.
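
A sketch of a custom HTTP monitor and its association with a node is shown below; the monitor name, send and receive strings, and node address are hypothetical, and the exact monitor template attributes are described in the Reference Guide:

b monitor my_http '{ use http send "GET /index.html" recv "OK" }'
b node 11.12.11.20:80 monitor use my_http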

For detailed information about health monitors, refer to the BIG-IP Reference Guide, Chapter 1, Configuring the BIG-IP Controller.

Statistical monitoring

The BIG-IP Controller provides multiple windows into its operation, including the Configuration utility, bigpipe, and utilities for logging and for displaying statistics on specific objects. For example, one utility, Big/stat, allows you to monitor statistics specific to virtual servers and nodes, such as the number of current connections or the number of packets processed since the last reboot. In addition, the BIG-IP Controller supports the Simple Network Management Protocol (SNMP) and provides management information bases (MIBs), so that you can configure traps or poll the controller from your standard network management station (NMS).
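
For example, current statistics can be displayed from the command line with bigpipe show commands of the following form (a sketch; the full set of monitoring commands and utilities is described elsewhere in this guide):

b virtual show
b node show
b pool show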

For detailed information on monitoring and administration features and utilities, refer to Chapter 18, Monitoring and Administration.