Manual Chapter : BIG-IP Cache Controller guide v3.3: Configuring Forward Proxy Caching

Applies To:


BIG-IP versions 1.x - 4.x

  • 3.3.1 PTF-06, 3.3.1 PTF-05, 3.3.1 PTF-04, 3.3.1 PTF-03, 3.3.1 PTF-02, 3.3.1 PTF-01, 3.3.1, 3.3.0


4  Configuring Forward Proxy Caching



Introducing forward proxy caching

This chapter explains how to set up a forward proxy caching configuration, in which a BIG-IP Cache Controller redundant system uses content-aware traffic direction to enhance the efficiency of an array of cache servers storing Internet content for internal users. This type of configuration is useful for any enterprise that wants to increase the speed with which its users retrieve content from the Internet.

The configuration detailed in this chapter uses the following BIG-IP Cache Controller features:

  • Cacheable content determination
    Cacheable content determination enables you to specify the type of content you cache on the basis of any combination of elements in the header of an HTTP request.
  • Content affinity
    Content affinity ensures that a given subset of content remains associated with a given cache to the maximum extent possible, even when cache servers become unavailable, or are added or removed. This feature also maximizes efficient use of cache memory.
  • Hot content load balancing
    Hot content load balancing identifies hot, or frequently requested, content on the basis of number of requests in a given time period for a given hot content subset. A hot content subset is different from, and typically smaller than, the content subsets used for content striping. Requests for hot content are redirected to a cache server in the hot pool, a designated group of cache servers. This feature maximizes the use of cache server processing power without significantly affecting the memory efficiency gained by content affinity.

Maximizing memory or processing power

From the time you implement a cache control rule until such time as a hot content subset becomes hot, the content is divided across your cache servers, so that no two cache servers contain the same content. In this way, efficient use of the cache servers' memory is maximized.

After a hot content subset becomes hot, requests for any content contained in that subset are load balanced, so that, ultimately, each cache server contains a copy of the hot content. The BIG-IP Cache Controller distributes requests for the hot content among the cache servers. In this way, efficient use of the cache servers' processing power is maximized.

Thus, for a particular content item, the BIG-IP Cache Controller maximizes either cache server memory (when the content is cool) or cache server processing power (when the content is hot), but not both at the same time. The fact that content is requested with greatly varying frequency enables the cache statement rule to evaluate and select the appropriate attribute to maximize for a given content subset.
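The routing decision described above can be sketched in Python. This is an illustrative model only, not BIG-IP internals; the function and variable names are assumptions, and the CRC-32 hash simply stands in for whatever subset mapping the cache statement actually uses:

```python
import random
import zlib

CONTENT_HASH_SIZE = 1028  # default number of hot content subsets (see content_hash_size)

def choose_server(uri, cache_pool, hot_pool, hot_subsets):
    """Pick a cache server for a cacheable request.

    Cool content is striped: each subset maps to exactly one cache server,
    so no two servers store the same cool content (memory is maximized).
    Hot content is load balanced across the hot pool, so each hot server
    ends up with a copy (processing power is maximized).
    """
    subset = zlib.crc32(uri.encode()) % CONTENT_HASH_SIZE
    if subset in hot_subsets:
        # Hot: any server in the hot pool may serve this request.
        return random.choice(hot_pool)
    # Cool: content affinity maps the subset to exactly one server.
    return cache_pool[subset % len(cache_pool)]
```

Note how the same request is deterministic while its subset is cool, but is spread across the hot pool once the subset turns hot.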

Using the configuration diagram

Figure 4.1, following, illustrates a forward proxy caching configuration, and provides an example configuration for this entire chapter. Remember that this is just a sample: when creating your own configuration, you must use IP addresses, host names, and so on, that are applicable to your own network.

Figure 4.1 Caching Internet content

Configuration tasks

To configure forward proxy caching, complete the following tasks in order:

  • Create pools
  • Create a cache control rule
  • Create a virtual server

    Each of the following sections explains one of these tasks, and shows how you perform the tasks in order to implement the configuration shown in Figure 4.1. Note that in this example, as in all examples in this guide, we use only non-routable IP addresses. In a real topology, the appropriate IP addresses have to be routable on the Internet.

Creating pools

For this configuration, you create load balancing pools for your origin server (in this configuration, the origin server is the router that provides access to the Internet), for your cache servers, and for your hot, or frequently requested, content servers, which may or may not be cache servers. A pool is a group of devices to which you want the BIG-IP Cache Controller redundant system to direct traffic. For more information about pools, refer to Configuring a pool, on page 5-5.

You create three pools:

  • Cache server pool
    The BIG-IP Cache Controller directs all cacheable content requests to this pool, unless a request is for hot content.
  • Origin server pool
    This pool includes your origin web server. Requests are directed to this pool when:
    • The request is for non-cacheable content; that is, content that is not identified in the cacheable content expression part of a cache rule statement. For more information, see Cacheable content expression, on page 4-8.
    • The request is from a cache server that does not yet contain the requested content, and no other cache server yet contains the requested content.
    • No cache server in the cache pool is available.
  • Hot cache servers pool
    If a request is for frequently requested content, the BIG-IP Cache Controller redundant system directs the request to this pool.

    Note: While the configuration shown in Figure 4.1 implements a hot cache servers pool, this pool is not required if you want to use the content determination and content affinity features. However, you must implement this pool if you want to use the hot content load balancing feature.

Creating a pool for the cache servers

First, create a pool for the cache servers. Use either the Configuration utility or the command line to create this pool.

To create a pool using the Configuration utility

  1. In the navigation pane, click Pools.
    The Pools screen opens.
  2. In the toolbar, click the Add Pool button.
    The Add Pool screen opens.
  3. In the Add Pool screen, configure attributes required for the cache servers you want to add to the pool.
    For additional information about configuring a pool, click the Help button.

    Configuration notes
    To create the configuration shown in Figure 4.1:

    · Create a pool named cache_servers.

    · Add each cache server from the example, 10.10.20.4, 10.10.20.5, and 10.10.20.6, to the pool. For each cache server you add to the pool, specify port 80, which means this cache server accepts traffic for the HTTP service only.

To create a pool from the command line

To define a pool from the command line, use the following syntax:

bigpipe pool <pool_name> { lb_method <lb_method> member <member_definition> ... member <member_definition> }

To implement the configuration shown in Figure 4.1, you use the command:

bigpipe pool cache_servers { lb_method round_robin member 10.10.20.4:80 member 10.10.20.5:80 member 10.10.20.6:80 }

Creating a pool for the origin server

Next, create a pool for your origin server. In this configuration, the origin server is the router between the cache servers and the Internet. Use either the Configuration utility or the bigpipe pool command, as you did to create the pool for the cache servers.

To create a pool using the Configuration utility

  1. In the navigation pane, click Pools.
    The Pools screen opens.
  2. In the toolbar, click the Add Pool button.
    The Add Pool screen opens.
  3. In the Add Pool screen, configure the attributes required for the cache servers you want to add to the pool.
    For additional information about configuring a pool, click the Help button.

    Configuration notes
    To create the configuration shown in Figure 4.1:

    · Create a pool named origin_server.

    · Add the origin server from the example, the router 10.10.20.254, to the pool. Specify port 80, which means the origin server accepts traffic for the HTTP service only.

To create a pool from the command line

To define a pool from the command line, use the following syntax:

bigpipe pool <pool_name> { lb_method <lb_method> member <member_definition> ... member <member_definition> }

To implement the configuration shown in Figure 4.1, you use the command:

bigpipe pool origin_server { lb_method round_robin member 10.10.20.254:80 }

Creating a pool for hot content

Finally, create a pool for hot content. You can use either the Configuration utility or the command line to create this pool, as in the previous sections.

To create a pool using the Configuration utility

  1. In the navigation pane, click Pools.
    The Pools screen opens.
  2. In the toolbar, click the Add Pool button.
    The Add Pool screen opens.
  3. In the Add Pool screen, configure the attributes required for the cache servers you want to add to the pool.
    For additional information about configuring a pool, click the Help button.

    Configuration notes
    To create the configuration shown in Figure 4.1:

    · Create a pool named hot_cache_servers.

    · Add each cache server from the example, 10.10.20.4, 10.10.20.5, and 10.10.20.6, to the pool. For each cache server you add to the pool, specify port 80, which means this cache server accepts traffic for the HTTP service only.

To create a pool from the command line

To define a pool from the command line, use the following syntax:

bigpipe pool <pool_name> { lb_method <lb_method> member <member_definition> ... member <member_definition> }

To implement the configuration shown in Figure 4.1, use the command:

bigpipe pool hot_cache_servers { lb_method round_robin member 10.10.20.4:80 member 10.10.20.5:80 member 10.10.20.6:80 }

Creating a cache control rule

A cache control rule is a specific type of rule. A rule establishes criteria by which a BIG-IP Cache Controller directs traffic. A cache control rule determines where and how the BIG-IP Cache Controller redundant system directs content requests in order to maximize the efficiency of your cache server array and of your origin web server.

A cache control rule includes a cache statement, which is composed of a cacheable content expression and two required attributes; it can also include several optional attributes. An attribute is a variable that the cache statement uses to direct requests.

A cache statement may be either the only statement in a rule, or it may be nested in a rule within an if statement.

Cacheable content expression

The cacheable content expression determines whether the BIG-IP Cache Controller redundant system directs a given request to the cache server or to the origin server, based on evaluating variables in the HTTP header of the request.

Any content that does not meet the criteria in the cacheable content expression is deemed non-cacheable.

For example, in the configuration illustrated in this chapter, the cacheable content expression includes content having the file extension .html or .gif. The BIG-IP Cache Controller redundant system considers any request for content having a file extension other than .html or .gif to be non-cacheable, and sends such requests directly to the origin server.

For your configuration, you may want to cache any content that is not dynamically generated.
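The cacheability test described above is simple to model. The following sketch mirrors the `http_uri ends_with` logic of the example rule in this chapter; the function name is illustrative, not part of the BIG-IP product:

```python
def is_cacheable(http_uri):
    """Return True if the request URI matches the chapter's example
    cacheable content expression: content ending in html or gif."""
    return http_uri.endswith("html") or http_uri.endswith("gif")
```

Any request for which this test is false is sent directly to the origin server pool.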

Required attributes

The cache control rule must include the following attributes:

  • origin_pool
    Specifies a pool of servers that contain original copies of all content. Requests are load balanced to this pool when any of the following are true:
    • The requested content does not meet the criteria in the cacheable content condition.
    • No cache server is available.
    • The BIG-IP Cache Controller redundant system is redirecting a request from a cache server that did not have the requested content.
  • cache_pool
    Specifies a pool of cache servers to which requests are directed in a manner that optimizes cache performance.

Optional attributes

The attributes in this section apply only if you are using the hot content load balancing feature.

  • hot_pool
    Specifies a pool of cache servers to which requests are load balanced when the requested content is hot.

    The hot_pool attribute is required if any of the following attributes is specified:
  • hot_threshold
    Specifies the minimum number of requests for content in a given hot content subset that causes the content subset to change from cool to hot at the end of the hit period.
    If you specify a value for hot_pool, but do not specify a value for this variable, the cache statement uses a default hot threshold of 100 requests.
  • cool_threshold
    Specifies the maximum number of requests for content in a given hot content subset that causes the content subset to change from hot to cool at the end of the hit period.
    If you specify a value for hot_pool, but do not specify a value for this variable, the cache statement uses a default cool threshold of 10 requests.
  • hit_period
    Specifies the period in seconds over which to count requests for particular content before determining whether to change the content demand status (hot or cool) of the content.
    If you specify a value for hot_pool, but do not specify a value for this variable, the cache statement uses a default hit period of 60 seconds.
  • content_hash_size
    Specifies the number of units, or hot content subsets, into which the content is divided when determining whether content demand status is hot or cool. The requests for all content in a given subset are summed, and a content demand status (hot or cool) is assigned to each subset. The content_hash_size should be within the same order of magnitude as the actual number of distinct content items. For example, if the entire site is composed of 500,000 pieces of content, a content_hash_size of 100,000 would be typical.
    If you specify a value for hot_pool, but do not specify a value for this variable, the cache statement uses a default hash size of 1028 subsets.
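The mapping from content to hot content subsets can be sketched as a simple hash. This is an assumption for illustration; BIG-IP does not document its internal hash function, so CRC-32 here is a stand-in:

```python
import zlib

def subset_of(uri, content_hash_size=1028):
    """Map a request URI to one of content_hash_size hot content subsets.

    Requests for every URI that lands in the same subset are summed
    together when the subset's demand status (hot or cool) is evaluated.
    """
    return zlib.crc32(uri.encode()) % content_hash_size
```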

Content demand status

Content demand status is a measure of the frequency with which a given hot content subset is requested. Content demand status, which is either hot or cool, is applicable only when using the hot content load balancing feature. For a given hot content subset, content demand status is cool from the time the cache control rule is implemented until the number of requests for the subset exceeds the hot_threshold during a hit_period. At this point content demand status for the subset becomes hot, and requests for any item in the subset are load balanced to the hot_pool. Content demand status remains hot until the number of requests for the subset falls below the cool_threshold during a hit_period, at which point the content demand status becomes cool. The BIG-IP Cache Controller directs requests for any item in the subset to the appropriate server in the cache_pool until such time as the subset becomes hot again.
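The hot/cool transitions described above amount to a small per-subset state machine. The sketch below models that logic with the documented defaults (hot_threshold 100, cool_threshold 10); the class and method names are illustrative assumptions, not BIG-IP internals:

```python
class HotContentTracker:
    """Track per-subset hot/cool demand status, evaluated at the end of
    each hit period, as described in the text above."""

    def __init__(self, hot_threshold=100, cool_threshold=10):
        self.hot_threshold = hot_threshold    # cool -> hot boundary
        self.cool_threshold = cool_threshold  # hot -> cool boundary
        self.hits = {}   # subset -> request count in the current hit period
        self.hot = set() # subsets whose demand status is currently hot

    def record_request(self, subset):
        self.hits[subset] = self.hits.get(subset, 0) + 1

    def end_hit_period(self):
        # Evaluate every subset seen this period, plus all currently hot
        # subsets (a hot subset with few or no requests must cool down).
        for subset in set(self.hits) | set(self.hot):
            count = self.hits.get(subset, 0)
            if subset not in self.hot and count > self.hot_threshold:
                self.hot.add(subset)       # demand status becomes hot
            elif subset in self.hot and count < self.cool_threshold:
                self.hot.discard(subset)   # demand status becomes cool
        self.hits.clear()                  # counts restart each hit period
```

While a subset is in the hot set, requests for its content are load balanced to the hot_pool; otherwise they follow content affinity to the cache_pool.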

To create a cache statement rule using the Configuration utility

  1. In the navigation pane, click Rules.
    The Rules screen opens.
  2. In the toolbar, click the Add Rule button.
    The Add Rule screen opens.
  3. In the Add Rule screen, type the cache statement.
    For example, given the configuration shown in Figure 4.1, to cache all content having either the file extension .html or .gif, you would type:

    rule cache_rule { cache ( http_uri ends_with "html" or http_uri ends_with "gif" ) { origin_pool origin_server cache_pool cache_servers hot_pool hot_cache_servers } }

  4. Click the Add button.

To create a cache control rule from the command line

To create a cache statement rule from the command line, use the following syntax:

bigpipe 'rule <rule_name> { cache ( <condition> ) { origin_pool <origin_pool_name> cache_pool <cache_pool_name> hot_pool <hot_pool_name> hot_threshold <hot_threshold_value> cool_threshold <cool_threshold_value> hit_period <hit_period_value> content_hash_size <content_hash_size_value> } }'

For example, given the configuration shown in Figure 4.1, to cache all content having the file extension .html or .gif, you would use the bigpipe command:

bigpipe 'rule cache_rule { cache ( http_uri ends_with "html" or http_uri ends_with "gif" ) { origin_pool origin_server cache_pool cache_servers hot_pool hot_cache_servers } }'

Creating a virtual server

Now that you have created pools and a cache control rule to determine how the BIG-IP Cache Controller will distribute outbound traffic, you need to create a wildcard virtual server to process traffic using this rule and these pools.

To create a wildcard virtual server using the Configuration utility

  1. In the navigation pane, click Virtual Servers.
  2. On the toolbar, click Add Virtual Server.
    The Add Virtual Server screen opens.
  3. In the Add Virtual Server screen, configure the attributes you want to use with the virtual server.
    For additional information about configuring a virtual server, click the Help button.

    Configuration notes

    · Add a virtual server with address 0.0.0.0 and port 0 (this designates a wildcard virtual server).

    · Add the rule cache_rule.

To create a wildcard virtual server from the command line

Use the bigpipe vip command to configure the virtual server to use the cache control rule you created:

bigpipe vip 0.0.0.0:0 <interface> use rule <rule name>

In the command, replace the parameters with the appropriate information:

  • <interface> is the interface on the BIG-IP on which you want to create this virtual server.
  • <rule name> is the name of the rule you want this virtual server to use.

    To implement the configuration shown in Figure 4.1, you would use the command:

    bigpipe vip 0.0.0.0:0 use rule cache_rule