Overview of the WebAccelerator System

Acceleration, deployed asymmetrically, can significantly improve transaction response times. It includes specific techniques that modify or optimize the way in which TCP and other protocols, applications, and data flows function across the network. Acceleration features enable you to refine transaction response times according to your specific needs.
Compression
Asymmetric compression condenses web data for transmission to a browser.
Data deduplication
Replaces previously sent data with dictionary pointers to minimize transmitted data and improve response time. Also ensures that the data is current and delivered only to authorized users.
TCP optimization
Asymmetric optimization aggregates requests for any TCP protocol to reduce connection processing, and optimizes TCP processing while increasing client-side connections to speed web page downloads.
Browser caching optimization
Manipulates HTTP responses to increase browser caching and decrease HTTP requests.
Object caching
Reduces client response times by serving web objects directly from a remote device, rather than from a central server.
File caching
Reduces client response times by serving files directly from a remote device, rather than from a central server.
HTTP protocol and web application optimization
Manipulates web requests and responses to increase HTTP and web application efficiency.
A virtual IP (VIP) address can be configured on a BIG-IP® Local Traffic Manager, which then acts as a proxy to the origin web servers in the pool. The BIG-IP Local Traffic Manager directs a client request made to the VIP to a server among a pool of servers, all of which are configured to respond to requests made to that address. A server pool improves the response time by reducing the load on each server and, consequently, the time required to process a request.
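The following minimal sketch illustrates the load-distribution idea behind a server pool: requests arrive at a single virtual address and are spread across several members. The addresses, the ServerPool class, and the round-robin selection are illustrative assumptions, not BIG-IP configuration.

# Minimal sketch of how a virtual IP can front a pool of servers.
# The addresses and pool are hypothetical; a real BIG-IP LTM pool is
# configured on the device, not in application code.
from itertools import cycle

class ServerPool:
    """Distributes requests made to one virtual address across pool members."""

    def __init__(self, virtual_ip, members):
        self.virtual_ip = virtual_ip
        self._members = cycle(members)      # simple round-robin selection

    def pick_member(self):
        """Return the next pool member that should process a request."""
        return next(self._members)

pool = ServerPool("192.0.2.10:80", ["10.0.0.1:80", "10.0.0.2:80", "10.0.0.3:80"])
for _ in range(4):
    print("request to", pool.virtual_ip, "-> forwarded to", pool.pick_member())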
BIG-IP® Global Traffic Manager improves application performance by load balancing client requests to the nearest or best-performing data center, thus ensuring the fastest possible response. It uses an intelligent DNS resolver to monitor data centers and to route connection requests to the best-performing site based on different factors. One option might be to send all requests to one site if another site is down. A second option might be to send a request to the data center that has the fastest response time. A third option might be to send a request to the data center that is located closest to the client's source address.
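As a rough illustration of that decision logic, the sketch below picks a data center by availability and then by measured response time or distance. The data-center names, health flags, and metrics are invented for the example.

# Simplified sketch of routing a request to the "best" data center,
# roughly following the options described above. The data centers,
# their health, and their measured response times are invented values.
data_centers = [
    {"name": "dc-east", "up": True,  "response_ms": 48, "distance_km": 300},
    {"name": "dc-west", "up": False, "response_ms": 25, "distance_km": 4200},
    {"name": "dc-eu",   "up": True,  "response_ms": 95, "distance_km": 6100},
]

def pick_data_center(centers, strategy="fastest"):
    available = [c for c in centers if c["up"]]          # skip sites that are down
    if strategy == "fastest":
        return min(available, key=lambda c: c["response_ms"])
    if strategy == "nearest":
        return min(available, key=lambda c: c["distance_km"])
    return available[0]                                   # fall back to any healthy site

print(pick_data_center(data_centers)["name"])             # dc-east
print(pick_data_center(data_centers, "nearest")["name"])  # dc-east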
Compression of HTTP and HTTPS traffic removes repetitive data and reduces the amount of data transmitted. Compression provided by the Local Traffic Manager offloads the compression overhead from origin web servers and enables the WebAccelerator system to perform other optimizations that improve performance for an HTTP or HTTPS stream.
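The size reduction that compression provides can be approximated with standard gzip, as in this sketch; the sample body and the use of Python's gzip module are illustrative only.

# Sketch of the bandwidth saving that HTTP compression provides: a repetitive
# HTML body shrinks substantially when gzip-compressed before transmission.
import gzip

body = b"<tr><td>row</td><td>value</td></tr>\n" * 500     # repetitive markup compresses well
compressed = gzip.compress(body)

print("original bytes:  ", len(body))
print("compressed bytes:", len(compressed))
# A device that compresses on behalf of the origin servers would also add
# "Content-Encoding: gzip" to the response headers.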
Data deduplication requires the symmetric acceleration provided by the BIG-IP® WAN Optimization Module. A client-side device sends a request to a server-side device, and the server-side device responds by sending the client-side device the new data, along with a dictionary entry or pointer that refers to that data. The client-side device stores the data and the pointer before sending the data on to the requesting client. When a user requests the data a second or subsequent time from the client-side device, the server-side device checks for changes to the data, and then sends one or more pointers and any new data that has not been previously sent.
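The dictionary-pointer idea can be sketched as follows: chunks that have already been sent are replaced by short references that the peer resolves from its copy of the dictionary. The fixed 64-byte chunking and SHA-1 keys are assumptions made for the example, not the module's actual algorithm.

# Toy illustration of data deduplication with a shared dictionary: chunks the
# peer has already seen are replaced with short pointers (their hashes).
import hashlib

CHUNK = 64
sent_dictionary = {}            # hash -> chunk, mirrored on both devices

def encode(data: bytes):
    """Split data into chunks; send a pointer for known chunks, data for new ones."""
    out = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        key = hashlib.sha1(chunk).hexdigest()
        if key in sent_dictionary:
            out.append(("ptr", key))            # previously sent: pointer only
        else:
            sent_dictionary[key] = chunk
            out.append(("data", chunk))         # new data plus a dictionary entry
    return out

first = encode(bytes(range(256)))                # all chunks are new
second = encode(bytes(range(256)) + b"B" * 64)   # only the final chunk is new
print([kind for kind, _ in first])               # ['data', 'data', 'data', 'data']
print([kind for kind, _ in second])              # ['ptr', 'ptr', 'ptr', 'ptr', 'data']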
The Local Traffic Manager decreases the number of server-side TCP connections required while increasing the number of simultaneous client-side TCP connections available to a browser for downloading a web page.
Decreasing the number of server-side TCP connections can improve application performance and reduce the number of servers required to host an application. Creating and closing a TCP connection requires significant overhead, so as the number of open server connections increases, maintaining those connections while simultaneously opening new connections can severely degrade server performance and user response time.
Despite the ability for multiple transactions to occur within a single TCP connection, a connection is typically between one client and one server. A connection normally closes either when a server reaches a defined transaction limit or when a client has transferred all of the files that are needed from that server. The Local Traffic Manager system, however, operates as a proxy and can pool TCP server-side connections by combining many separate transactions, potentially from multiple users, through fewer TCP connections. The Local Traffic Manager system opens new server-side connections only when necessary, thus reusing existing connections for requests from other users whenever possible.
Increasing the number of client-side TCP connections reduces the time to load a web page because a server typically terminates a connection after reaching a predefined transaction limit, requiring the browser to open another connection to that URL.
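The server-side pooling described above can be sketched as a simple connection pool that reuses idle server connections for transactions from different clients and opens a new connection only when none is free. The class and the connection labels are illustrative stand-ins.

# Rough sketch of server-side connection reuse: transactions from many client
# connections are multiplexed over a small set of server connections, and a new
# server connection is opened only when none is free. Purely illustrative.
class ServerConnectionPool:
    def __init__(self):
        self.idle = []            # server-side connections ready for reuse
        self.opened = 0

    def acquire(self):
        if self.idle:
            return self.idle.pop()          # reuse an existing connection
        self.opened += 1
        return f"server-conn-{self.opened}" # open a new one only when necessary

    def release(self, conn):
        self.idle.append(conn)

pool = ServerConnectionPool()
for client in ["alice", "bob", "carol", "dave"]:
    conn = pool.acquire()
    print(f"{client}'s request sent over {conn}")
    pool.release(conn)                      # transaction done; connection kept open
print("server-side connections opened:", pool.opened)   # 1, not 4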
Caching provides storage of data within close proximity of the user and permits reuse of that data during subsequent requests.
In one form of caching, a WebAccelerator system instructs a client browser to cache an object, marked as static, for a specified period. During this period, the browser reads the object from cache when building a web page until the content expires, whereupon the client reloads the content. This form of caching enables the browser to use its own cache instead of expending time and bandwidth by accessing data from a central site.
In a second form of caching, a WebAccelerator system in a data center manages requests for web application content from origin web servers. Operating asymmetrically, the WebAccelerator system caches objects from origin web servers and delivers them directly to clients. The WebAccelerator system handles both static content and dynamic content, by processing HTTP responses, including objects referenced in the response, and then sending the included objects as a single object to the browser. This form of caching reduces server TCP and application processing, improves web page loading time, and reduces the need to regularly expand the number of web servers required to service an application.
In a third form of caching, a WebAccelerator system at a remote site caches and serves content to users. The WebAccelerator system serves content locally whenever possible, thus reducing the response time and use of the network.
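Common to all three forms is storing an object with a lifetime and serving it until it expires, roughly as in this sketch; the ObjectCache class and the five-minute lifetime are invented for illustration.

# Simplified sketch of the caching described above: a response is stored with a
# lifetime, served from cache while fresh, and refreshed once it expires.
import time

class ObjectCache:
    def __init__(self):
        self._store = {}                     # key -> (object, expiry time)

    def put(self, key, obj, max_age_seconds):
        self._store[key] = (obj, time.time() + max_age_seconds)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None                      # never cached
        obj, expires = entry
        if time.time() >= expires:
            del self._store[key]             # expired: must be fetched again
            return None
        return obj

cache = ObjectCache()
cache.put("/images/logo.png", b"...png bytes...", max_age_seconds=300)
print(cache.get("/images/logo.png") is not None)   # True: served from cache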
HTTP protocol optimization achieves a high user performance level by optimally modifying each HTTP session. Some web applications, for example, cannot return an HTTP 304 status code (Not Modified) in response to a client request, consequently sending an entire, identical object repeatedly. Because the WebAccelerator system proxies connections and caches content, when a requested object is unchanged, the WebAccelerator system returns an HTTP 304 response instead of returning the unchanged object, thus enabling the browser to load the content from its own cache even when a web application hard codes a request to resend the object.
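The behavior reads roughly like the following sketch, which answers a repeat request for an unchanged object with a 304 status instead of the full body; the ETag-style validator and MD5 hashing are assumptions for the example.

# Sketch of the 304 behavior described above: if the cached copy of an object
# is unchanged, answer with "304 Not Modified" instead of resending the body.
import hashlib

cached_objects = {"/app/report": b"<html>large, unchanged page</html>"}

def etag(body: bytes) -> str:
    return hashlib.md5(body).hexdigest()

def handle_request(path, if_none_match=None):
    body = cached_objects[path]
    tag = etag(body)
    if if_none_match == tag:
        return 304, b""            # browser can use its own cached copy
    return 200, body               # first request, or object has changed

status, body = handle_request("/app/report")
status2, _ = handle_request("/app/report", if_none_match=etag(body))
print(status, status2)             # 200 304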
The WebAccelerator system improves the performance of web applications by modifying server responses, which includes marking an object as cacheable with a realistic expiration date. This optimization is especially beneficial when using off-the-shelf or custom applications that impede or prevent changes to code.
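Marking a response cacheable amounts to adding caching headers on the way back to the client, along the lines of this sketch; the one-hour lifetime and the mark_cacheable helper are illustrative choices, not the system's actual mechanism.

# Sketch of marking a response cacheable on behalf of an application that does
# not set caching headers itself. The one-hour lifetime is an arbitrary example.
import time
from email.utils import formatdate

def mark_cacheable(headers: dict, max_age_seconds: int = 3600) -> dict:
    """Return a copy of the response headers with a realistic expiration added."""
    headers = dict(headers)
    headers["Cache-Control"] = f"public, max-age={max_age_seconds}"
    headers["Expires"] = formatdate(time.time() + max_age_seconds, usegmt=True)
    return headers

original = {"Content-Type": "text/css"}
print(mark_cacheable(original))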
The BIG-IP® WebAccelerator system is a delivery solution designed to improve the speed at which users access your web applications (such as Microsoft® SharePoint, Microsoft® Outlook Web Access, BEA AquaLogic®, SAP® Portal, Oracle® Siebel CRM, Oracle® Portal, and others) and wide area network (WAN).
The WebAccelerator system does this through acceleration policy features that modify web browser behavior, and that compress and cache dynamic and static content, which decreases bandwidth usage and ensures that your users get the quickest, most efficient access to your web applications and WAN.
The BIG-IP WebAccelerator system is one of several products that constitute the BIG-IP product family. All BIG-IP products run on the Traffic Management Operating System, commonly referred to as TMOS®.
To accelerate access to your applications, the WebAccelerator system uses acceleration policies to manipulate HTTP responses from origin web servers before processing them. As a result, the WebAccelerator system processes the manipulated responses, rather than the original responses sent by the origin web servers.
In addition to using the acceleration policy features, you can easily monitor your HTTP traffic and system processes through monitoring tools.
An asymmetric deployment consists of one or more WebAccelerator systems installed on one end of a WAN, in the same location as the origin web servers that run the applications for which the WebAccelerator system accelerates client access.
Figure 2.1 illustrates an asymmetric deployment with a single WebAccelerator system on one end of a WAN.
Most sites are built on a collection of web servers, application servers, and database servers that we refer to collectively as origin web servers. The BIG-IP® WebAccelerator system is installed on your network between the users of your applications and the origin web servers on which the applications run, and accelerates your applications' responses to HTTP requests.
Origin web servers can serve all possible permutations of content, while the WebAccelerator system only stores and serves page content that clients have previously requested from your site. By transparently servicing the bulk of common requests, the WebAccelerator system significantly reduces the load on your origin web servers, which improves performance for your site.
Once installed, the WebAccelerator system receives all requests destined for the origin web server. When a client makes an initial request for a specific object, the WebAccelerator system relays the request to the origin web server, and caches the response that it receives in accordance with the policy, before forwarding the response to the client. The next time a client requests the same object, the WebAccelerator system serves the response from cache, based on lifetime settings within the policy, instead of sending the request to the origin web servers.
When the WebAccelerator system receives a request, it takes one of the following actions (a simplified sketch follows this list).
Services the request from its cache
Upon receiving a request from a browser or web client, the WebAccelerator system first checks whether it can service the request from compiled responses in the system's cache.
Sends the request to the origin web servers
If the WebAccelerator system cannot service the request from the system's cache, it sends a request to the origin web server. Once it receives a response from the origin web server, the WebAccelerator system caches that response according to the associated acceleration policy rules, and then forwards the response to the client.
Relays the request to the origin web servers
The WebAccelerator system relays requests directly to the origin web server for some predefined types of content, such as requests for streaming video.
Creates a tunnel to send the request to the origin web servers
For any encrypted (HTTPS) content that you do not want the WebAccelerator system to process, you can use tunneling. Note that the WebAccelerator system can cache and respond to SSL traffic without using tunnels.
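A condensed sketch of these four paths follows. The tunneled host set, the relayed file extensions, and the cache structure are placeholders invented for the example.

# Sketch of the four handling paths described in the list above. The host set,
# relayed extensions, and cache are simplified stand-ins.
cache = {}                                     # UCI -> compiled response
TUNNELED_HOSTS = {"secure.example.com"}        # HTTPS content you chose not to process
RELAY_EXTENSIONS = (".mp4", ".flv")            # e.g. streaming video is relayed as-is

def handle(request):
    if request["host"] in TUNNELED_HOSTS:
        return "tunnel to origin"              # pass through untouched
    if request["uri"].endswith(RELAY_EXTENSIONS):
        return "relay to origin"               # predefined types are not cached
    uci = (request["host"], request["uri"])
    if uci in cache:
        return "serve from cache"              # compiled response already stored
    cache[uci] = "compiled response"           # fetch, cache per policy, then respond
    return "send to origin, cache response"

print(handle({"host": "www.example.com", "uri": "/index.html"}))
print(handle({"host": "www.example.com", "uri": "/index.html"}))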
During the process of application matching, the WebAccelerator system uses the hostname in the HTTP request to match the request to an application profile that you created. Once matched to an application profile, the WebAccelerator system applies the associated acceleration policy's matching rules to group the request and response to a specific leaf node on the Policy Tree. The WebAccelerator system then applies the acceleration policy's acceleration rules to each group. These acceleration rules dictate how the WebAccelerator system manages the request.
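Conceptually, the matching resembles the following sketch: the Host value selects a profile, and the profile's policy rules map the request path to a node whose acceleration rules then apply. The profile names, rules, and match function are invented for illustration.

# Simplified sketch of application matching: the Host header selects an
# application profile, and the profile's policy rules map the request to a
# policy node. Profile names and rules here are invented.
application_profiles = {
    "portal.example.com": "portal_policy",
    "crm.example.com": "crm_policy",
}

policy_rules = {                                # policy -> (path prefix -> leaf node)
    "portal_policy": {"/images/": "static_content", "/search": "dynamic_content"},
    "crm_policy": {"/reports/": "reports"},
}

def match(host, path):
    policy = application_profiles.get(host)
    if policy is None:
        return None, None
    for prefix, node in policy_rules[policy].items():
        if path.startswith(prefix):
            return policy, node                 # acceleration rules of this node apply
    return policy, "default"

print(match("portal.example.com", "/images/logo.gif"))   # ('portal_policy', 'static_content')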
The first time that a WebAccelerator system receives new content from the origin web server in response to an HTTP request, it processes the information as follows, before returning the requested object (response) to the client:
Compiles an internal representation of the object
The WebAccelerator system uses compiled responses received from the origin web server to assemble an object in response to an HTTP request.
Assigns a Unique Content Identifier (UCI) to the compiled response, based on elements present in the request
The origin web server generates specific responses based on certain elements in the request, such as the URI and query parameters. The WebAccelerator system includes these elements in a UCI that it creates, so that it can easily match future requests to the correct content in its cache. The WebAccelerator system matches content to the UCI for both the request and the compiled response that it created to service the request.
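A UCI can be thought of as a key derived from the request elements that actually vary the response, roughly as sketched below; the choice of significant query parameters and the SHA-256 hashing are assumptions for the example.

# Sketch of building a Unique Content Identifier from the request elements that
# the origin server uses to vary its responses (here: host, path, and selected
# query parameters). The element list and hashing are illustrative only.
import hashlib
from urllib.parse import urlsplit, parse_qsl

SIGNIFICANT_PARAMS = {"id", "lang"}            # parameters that change the response

def make_uci(url: str) -> str:
    parts = urlsplit(url)
    params = sorted((k, v) for k, v in parse_qsl(parts.query) if k in SIGNIFICANT_PARAMS)
    key = f"{parts.netloc}{parts.path}?{params}"
    return hashlib.sha256(key.encode()).hexdigest()

# Same significant elements -> same UCI, so the cached compiled response matches.
a = make_uci("http://www.example.com/catalog?id=7&lang=en&session=abc")
b = make_uci("http://www.example.com/catalog?lang=en&id=7&session=xyz")
print(a == b)                                   # True: the session parameter is not significant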
When a client requests content from your site, the process works as follows (a condensed sketch appears after the steps).
1. Clients, using web browsers, request pages from your site. From the client's perspective, they are connecting directly to your site; they have no knowledge of the WebAccelerator system.
2. The WebAccelerator system examines the client's request to determine whether it meets all the HTTP requirements needed to service the request. If the request does not meet the HTTP requirements, the WebAccelerator system issues an error to the client.
3. The WebAccelerator system examines the request elements and creates a UCI, and then reviews the system's cache to see if it has a compiled response stored under that same UCI.
4. If the content is being requested for the first time (there is no matching compiled response in the system's cache), the WebAccelerator system uses the host map to relay the request to the appropriate origin web server to get the required content.
5. If content with the same UCI is already stored as a compiled response in the system's cache, the WebAccelerator system checks to see if the content has expired. If the content has expired, the WebAccelerator system checks to see if the information in the system's cache still matches the origin web server. If it does, the WebAccelerator system moves directly to step 7. Otherwise, it performs the following step.
6. The origin web server replies to the WebAccelerator system with the requested material, and the WebAccelerator system compiles the response. If the response meets the appropriate requirements, the WebAccelerator system stores the compiled response in the system's cache under the appropriate UCI.
7. The WebAccelerator system uses the compiled response, and any associated assembly rule parameters, to recreate the page. The assembly rule parameters dictate how to update the page with generated content.
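The following condensed sketch ties the steps together: validate the request, build a UCI, check the cache and its freshness, go to the origin web server when needed, and assemble the page from the compiled response. All helper functions, the five-minute lifetime, and the cache structure are simplified stand-ins, not the system's actual implementation.

# End-to-end sketch of the numbered steps above.
import time

cache = {}                                      # UCI -> {"compiled": ..., "expires": ...}

def process(request, fetch_from_origin, now=time.time):
    if "Host" not in request["headers"]:                        # step 2: HTTP requirements
        return "400 error to client"
    uci = (request["headers"]["Host"], request["uri"])          # step 3: create the UCI
    entry = cache.get(uci)
    if entry and now() < entry["expires"]:                      # step 5: still fresh?
        return assemble(entry["compiled"])                      # step 7: serve from cache
    response = fetch_from_origin(request)                       # steps 4/6: go to origin
    cache[uci] = {"compiled": compile_response(response),       # step 6: store compiled copy
                  "expires": now() + 300}
    return assemble(cache[uci]["compiled"])                     # step 7: recreate the page

def compile_response(response):
    return {"body": response}

def assemble(compiled):
    return compiled["body"]

print(process({"headers": {"Host": "www.example.com"}, "uri": "/"},
              fetch_from_origin=lambda req: "<html>page</html>"))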