Archived Manual Chapter: BIG-IP Administrator guide v3.3: A simple intranet configuration

This article has been archived, and is no longer maintained.



3  A Simple Intranet Configuration



A simple intranet configuration

After you have completed the basic configuration of the BIG-IP Controller (see Determining which configuration tasks to do on page 18-1), you may want to review this sample configuration, which uses the BIG-IP Controller for traffic management. The following example describes a simple intranet load balancing configuration.

The following sections provide you with a basic intranet configuration that can help you plan your installation. This example can also help you understand how people use some of the most popular BIG-IP Controller features to resolve specific issues or to enhance network performance in general.

This example is a configuration that might be found in a large corporate intranet. In this scenario, the BIG-IP Controller performs load balancing for two different types of connection requests:

  • Connections to the company's intranet web site
    The load balancing for the company's intranet web site is similar to basic Internet web site load balancing. The BIG-IP Controller simply load balances the two web servers that host the company intranet web site.
  • Connections to hosts on the Internet
    In this example, the BIG-IP Controller provides load balancing for connections bound for the Internet. However, the example shows a somewhat sophisticated setup in which the BIG-IP Controller intercepts HTTP traffic and directs it to a special cache server. Only clients using protocols other than HTTP, such as FTP or SMTP email, are load balanced to one of the two firewalls that lead to the Internet. This greatly reduces the number of concurrent connections that the firewalls have to maintain. Clients retrieving web content get the content from the cache server itself, instead of from the actual web site host. If the cache server does not have the content that the client requested, the cache server retrieves the content from the real web site on behalf of the client and then forwards it to the client.

Setting up the topology

To set up load balancing for this intranet example, you need to create three pools that are referenced by three virtual servers: one that handles load balancing for the internal corporate web site, one that directs outbound HTTP traffic to the cache server, and one that handles load balancing for the firewalls.

Figure 3.1 shows the topology for the sample configuration. A standard virtual server handles the load balancing for the corporate intranet web site, Corporate.main.net. Wildcard Virtual Server 1 takes all of the outbound HTTP traffic and directs it to the cache server. Wildcard Virtual Server 2 handles all of the remaining traffic that actually has to go out to the Internet.

Figure 3.1 A basic intranet configuration

A wildcard virtual server is a special type of virtual server that accepts traffic destined for IP addresses unknown to the BIG-IP Controller, as all outside Internet addresses would be. When the BIG-IP Controller receives a connection request, it first tries to match the requested IP address to one of its standard virtual server IP addresses. If it cannot find a match among the standard virtual servers that it manages, it then looks for a wildcard virtual server. Wildcard virtual servers use the default IP address 0.0.0.0, which the BIG-IP Controller can use as a catch-all IP address match.
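The matching order described above can be sketched in a few lines of Python. This is an illustration of the concept only, not BIG-IP code; the standard virtual server's address (192.168.200.10) is hypothetical, since the example does not state it, while the pool names come from the sections below.

```python
# Hypothetical table of virtual servers for this example. The standard
# virtual server's address is an assumption; the pool names match the
# three pools described in this chapter.
VIRTUAL_SERVERS = {
    ("192.168.200.10", 80): "http_pool",    # standard virtual server
    ("0.0.0.0", 80): "specificport_pool",   # port-specific wildcard
    ("0.0.0.0", 0): "defaultwild_pool",     # default wildcard
}

def match_virtual_server(dest_ip, dest_port):
    """Return the pool for a connection request, trying an exact match
    first, then a port-specific wildcard, then the default wildcard."""
    for key in ((dest_ip, dest_port),       # standard virtual server
                ("0.0.0.0", dest_port),     # port-specific wildcard
                ("0.0.0.0", 0)):            # catch-all wildcard
        if key in VIRTUAL_SERVERS:
            return VIRTUAL_SERVERS[key]
    return None
```

With this table, an outbound request to an unknown address on port 80 falls through to the port-specific wildcard (the cache server's pool), and any other outbound port falls through to the default wildcard (the firewall pool).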

This example contains three types of virtual servers:

  • Standard virtual server
    The standard virtual server references the http_pool that contains two members: 192.168.100.10:80 and 192.168.100.11:80.
  • Port-specific wildcard virtual servers
    A port-specific wildcard virtual server uses the default IP address, but it has a specific port number, and it only handles traffic associated with that port number. In the preceding example, the port-specific wildcard virtual server captures all outbound traffic that uses port 80 and directs it to the cache server. The port-specific wildcard virtual server references the specificport_pool that contains one member: 192.168.100.20:80.
  • Default wildcard virtual servers
    A default wildcard virtual server is one that uses only port 0. Port 0, like the 0.0.0.0 IP address, is a catch-all match for outgoing traffic that does not match any standard virtual server or any port-specific wildcard virtual server. Default wildcard virtual servers typically handle traffic only for firewalls or routers. In the preceding example, the default wildcard virtual server load balances the intranet's firewalls that connect to the Internet. The default wildcard virtual server references the defaultwild_pool that contains two members: 192.168.100.30:80 and 192.168.100.31:80.
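The three pools above can be summarized in a short sketch. The member addresses are taken directly from the example; the round-robin pick is a simplified stand-in for the BIG-IP Controller's default load balancing mode, not its actual implementation.

```python
from itertools import cycle

# The three pools from this example, with the members listed above.
POOLS = {
    "http_pool":         ["192.168.100.10:80", "192.168.100.11:80"],
    "specificport_pool": ["192.168.100.20:80"],
    "defaultwild_pool":  ["192.168.100.30:80", "192.168.100.31:80"],
}

# One rotor per pool, so each pool cycles through its members in order.
_rotors = {name: cycle(members) for name, members in POOLS.items()}

def pick_member(pool_name):
    """Return the next member of the pool in round-robin order."""
    return next(_rotors[pool_name])
```

Each call to `pick_member("http_pool")` alternates between the two intranet web servers, which is the behavior the standard virtual server provides for the corporate web site.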

Using additional features

In this type of configuration, you might want to take advantage of additional BIG-IP Controller features that are described both in this guide and in the BIG-IP Controller Reference Guide. These features include:

  • State mirroring
    This feature is available only for redundant BIG-IP Controller systems, and it greatly enhances the reliability of your network. A redundant system runs two BIG-IP Controllers at the same time. One unit actively handles all connection requests, and the other unit acts as a standby, ready to take over immediately if the active unit fails. The state mirroring feature allows the standby unit to maintain all of the current connection and persistence information, so that if the active unit fails and the standby unit takes over, all connections continue virtually uninterrupted. This is especially useful for long-lived connections, such as FTP transfers, which would otherwise have to be restarted from the beginning.
  • Destination address affinity
    Allows the BIG-IP Controller to cache content on specified cache servers by sending all requests for the same server to the same node. This avoids caching the same content on multiple cache servers. Because the above example includes only one cache server, you would not actually implement this feature in that example. However, the destination address affinity feature is very useful for users who work with multiple cache servers in a similar intranet scenario. Caching specific information on the same cache server saves disk space on your cache servers.
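The idea behind destination address affinity can be sketched as a hash over the destination address, so that every request for a given origin server always lands on the same cache server. This is a conceptual illustration, not the BIG-IP algorithm, and the second cache server address is hypothetical (the example above has only one).

```python
import hashlib

# Hypothetical cache server pool; the example itself has one cache
# server, so this feature would only matter with two or more.
CACHE_SERVERS = ["192.168.100.20:80", "192.168.100.21:80"]

def cache_for(dest_ip):
    """Map a destination (origin server) address to one cache server,
    deterministically, so a given site is cached in only one place."""
    digest = hashlib.md5(dest_ip.encode()).digest()
    return CACHE_SERVERS[digest[0] % len(CACHE_SERVERS)]
```

Because the mapping depends only on the destination address, repeated requests for the same site never duplicate its content across cache servers.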
  • IP address filtering
    Allows you to deny connections going to or coming from specific IP addresses. This feature is useful if you are experiencing denial-of-service attacks from hostile sources. You can set up an IP filter to ignore traffic coming in from the hostile IP address.
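A minimal sketch of the IP filtering decision, assuming a simple deny list (the address shown is a hypothetical hostile source, not from the example):

```python
# Hypothetical deny list of hostile source or destination addresses.
DENY = {"203.0.113.9"}

def allow(src_ip, dst_ip):
    """Return True if a connection should be accepted, False if either
    endpoint is on the deny list."""
    return src_ip not in DENY and dst_ip not in DENY
```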