Manual Chapter: Understanding Probes

When you install a Global Traffic Manager in a network, that system typically works within a larger group of BIG-IP® products. These products include other Global Traffic Managers, Link Controllers, and Local Traffic Managers. The Global Traffic Manager must be able to communicate with these other systems to maintain an accurate assessment of the health and availability of different network components. For example, the Global Traffic Manager must be able to acquire statistical data from resources that are managed by a Local Traffic Manager in a different data center. BIG-IP systems acquire this information through the use of probes. A probe is an action a BIG-IP system takes to acquire data from other network resources.
Probes are an essential means by which the Global Traffic Manager tracks the health and availability of network resources; however, it is equally important that the responsibility for conducting probes be distributed across as many BIG-IP products as possible. This distribution ensures that no one system becomes overloaded with conducting probes, which would cause a decrease in performance in the other tasks for which a BIG-IP system is responsible.
Note: If you are familiar with the precursor to the Global Traffic Manager, the 3-DNS Controller, you are likely already familiar with probes. With 3-DNS Controllers, a single system, the principal system, was responsible for managing all of the probe requests. With the introduction of the Global Traffic Manager, these requests are distributed more efficiently across other BIG-IP Global Traffic Manager systems.
To distribute probe requests effectively across multiple BIG-IP systems, Global Traffic Managers employ several different technologies and methodologies, including:
iQuery, which is the communication protocol used between Global Traffic Managers and the big3d agents that reside on other BIG-IP systems
A selection methodology that determines which big3d agent actually conducts the probe
One of the important concepts to remember when understanding how the Global Traffic Manager acquires network data is that the process consists of several tasks:
A Global Traffic Manager is selected to manage the probe.
The Global Traffic Manager delegates the probe to a big3d agent.
The big3d agent conducts the probe.
The big3d agent broadcasts the results of the probe, allowing all Global Traffic Managers to receive the information.
At the heart of probe management with Global Traffic Manager systems is iQuery, the communications protocol that these systems use to send information from one system to another. With iQuery, Global Traffic Managers in the same synchronization group can share configuration settings, assign probe requests to big3d agents, and receive data on the status of network resources.
The iQuery protocol is an XML protocol that is sent between each system using gzip compression and SSL. These communications can only be allowed between systems that have a trusted relationship established, which is why configuration tools such as big3d_install, bigip_add, and gtm_add are critical when installing or updating Global Traffic Managers. If two systems have not exchanged their SSL certificates, they cannot share information with each other using iQuery.
In addition to requiring trusted relationships, iQuery communications only occur across the same VLAN; in other words, if two systems reside on different VLANs, they cannot communicate through iQuery. Also, iQuery communications occur only within the same synchronization group. If your network consists of two synchronization groups, with each group sharing a subset of network resources, these groups both probe the network resources and communicate with iQuery separately.
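The iQuery wire format itself is not documented here, but the characteristics described above (XML payloads sent with gzip compression) can be sketched conceptually. In the following Python sketch, the element and attribute names are invented for illustration, and the SSL transport layer is omitted; only the serialize-and-compress idea is shown.

```python
import gzip
import xml.etree.ElementTree as ET

def encode_message(element: ET.Element) -> bytes:
    """Serialize an XML element and gzip-compress it, as iQuery
    conceptually does before sending a message over SSL."""
    return gzip.compress(ET.tostring(element))

def decode_message(payload: bytes) -> ET.Element:
    """Decompress and parse a received message."""
    return ET.fromstring(gzip.decompress(payload))

# Example: a made-up status message (names are illustrative only,
# not the actual iQuery schema).
msg = ET.Element("probe_result", {"resource": "10.0.0.5", "status": "up"})
wire = encode_message(msg)
decoded = decode_message(wire)
print(decoded.tag, decoded.attrib["status"])  # probe_result up
```

The compression matters because synchronization groups exchange configuration and statistics continuously; compressing the verbose XML keeps that ongoing traffic small.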
Generally, iQuery communications occur behind the scenes; however, on occasion it can be necessary to view the data transmitted between each system. For example, you might be troubleshooting the reason that a Global Traffic Manager is exhibiting a particular behavior. In such a situation, you can use the iqdump command.
Type iqdump <ip address> <synchronization group name>.
The IP address that you type must be the IP address with which the system is communicating with iQuery. This IP address can be either the local system or a remote system.
Press Enter.
Immediately, information the BIG-IP system has received through iQuery appears in the command window. Note that the data displayed represents only the information the system receives; it does not display the information the system has sent through iQuery.
Note: One of the first pieces of information displayed when running iqdump is the version of the remote big3d agent. This is an excellent way of determining whether a system is running the latest version of the big3d agent.
When you assign a monitor to a network resource through the Configuration utility of the Global Traffic Manager, the first step is to designate a Global Traffic Manager that is responsible for ensuring that a big3d agent probes the selected resource. It is important to remember that this does not necessarily mean the selected Global Traffic Manager actually conducts the probe; it means only that a specific Global Traffic Manager is in charge of assigning a big3d agent to probe the resource. The big3d agent could reside on the same system as the Global Traffic Manager, on a different Global Traffic Manager, or on another BIG-IP system.
A crucial component in determining which system manages a probe request is the set of data centers that you defined in the Global Traffic Manager configuration. For each probe, the Global Traffic Manager systems determine whether a Global Traffic Manager belongs to the same data center as the resource, and if so, whether more than one Global Traffic Manager belongs to that data center.
By default, Global Traffic Manager systems delegate probe management to a system that belongs to the same data center as the resource, because the close proximity of system and resource improves probe response time.
To illustrate how these considerations factor into probe management, consider a fictional company, SiteRequest. This company has three data centers: one in Los Angeles, one in New York, and one in London. The Los Angeles data center contains a redundant system consisting of two Global Traffic Managers, the New York data center contains a single stand-alone Global Traffic Manager, and the London data center contains no Global Traffic Managers.
Now, consider that you want to acquire statistical data from a resource in the New York data center. First, the Global Traffic Manager systems, based on their iQuery communications with each other, identify whether there is a Global Traffic Manager that belongs to the New York data center. In this case, the answer is yes; the New York data center contains a Global Traffic Manager. Next, the systems determine if more than one Global Traffic Manager belongs to the New York data center. In this case, the answer is no; the New York data center has only a stand-alone system. Consequently, the Global Traffic Manager in the New York data center assumes responsibility for conducting the probe on this particular resource.
In situations where more than one Global Traffic Manager belongs to a data center, the systems use an algorithm to distribute the responsibility for probes equally among Global Traffic Manager systems. This distribution ensures that each Global Traffic Manager system has an equal chance of being responsible for managing a probe request.
To demonstrate how probe requests are delegated between two Global Traffic Manager systems at the same data center, consider again the network configuration at SiteRequest. This time, the company needs to acquire data from a resource that resides at the Los Angeles data center. As with the previous example, the first step identifies whether the Los Angeles data center has any Global Traffic Managers; in this case, the answer is yes. The next criteria is whether there is more than one Global Traffic Manager at that data center; in this case, the answer is also yes: the Los Angeles data center has a redundant system that consists of two Global Traffic Managers. Because there are two Global Traffic Managers at this data center, each system compares the hash value of the resource with its own information; whichever Global Traffic Manager has the closest value to the resource becomes responsible for managing the probe request.
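The hash comparison described above can be modeled roughly as follows. The hash function and exact comparison the real systems use are not documented here, so this Python sketch is illustrative only: each Global Traffic Manager hashes the resource and its own identity, and the system whose hash value lies closest to the resource's becomes responsible for the probe. Because the hashes are deterministic, every member of the synchronization group computes the same answer independently, and the assignment stays stable until the set of systems changes.

```python
import hashlib

def _stable_hash(value: str) -> int:
    # A stable hash; MD5 is an illustrative stand-in for whatever
    # hash the real systems use.
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

def responsible_gtm(resource: str, gtms: list[str]) -> str:
    """Pick the Global Traffic Manager whose hash value is closest
    to the resource's hash value."""
    target = _stable_hash(resource)
    return min(gtms, key=lambda g: abs(_stable_hash(g) - target))

# Two GTMs in the same data center, as in the Los Angeles example
# (names are hypothetical).
owner = responsible_gtm("web-pool-member-1", ["gtm-la-1", "gtm-la-2"])
print(owner)
```

A scheme like this distributes resources roughly evenly across the systems without requiring any central coordinator, which is why each Global Traffic Manager has an equal chance of managing any given probe request.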
A final consideration is if a data center does not have any Global Traffic Managers at all, such as the London data center in the configuration for SiteRequest. In these situations, the responsibility for probing a resource at that data center is divided among the other Global Traffic Managers; much in the same way as the responsibility is divided among Global Traffic Managers within the same data center.
Once a Global Traffic Manager becomes responsible for managing a probe, it remains responsible for that probe until the network configuration changes, such as when a Global Traffic Manager is added to or removed from the configuration.
As described in Determining probe responsibility, the first stage in conducting a probe of a network resource is to select the Global Traffic Manager. In turn, the Global Traffic Manager delegates the probe to a big3d agent, which is responsible for querying the given network resource for data.
One way to think about the probe delegation process is that it is similar to the two-tiered load balancing method the Global Traffic Manager uses when delegating traffic. With DNS traffic, the Global Traffic Manager identifies the wide IP to which the traffic belongs. Then, it load balances that traffic among the pools associated with the wide IP. Once it selects a pool, the system load balances the request across the pool members within that pool.
Delegating probe requests occurs in a similar two-tiered fashion. First, the Global Traffic Managers within a synchronization group determine which system is responsible for managing the probe. This does not necessarily mean that the selected Global Traffic Manager conducts the probe itself; it means only that a specific Global Traffic Manager ensures that the probe takes place. Next, the Global Traffic Manager selects one of the available big3d agents to actually conduct the probe. As each BIG-IP system has a big3d agent, the number of agents available to conduct the probe depends on the number of BIG-IP systems.
To illustrate how probes are delegated to big3d agents, consider again the fictional company, SiteRequest, that was used in Determining probe responsibility. This company has three data centers: one in Los Angeles, one in New York, and one in London. The Los Angeles data center, for example, contains four BIG-IP systems, each of which runs a big3d agent.
Now, consider that a Global Traffic Manager in the Los Angeles data center has assumed responsibility for managing a probe for a network resource. At this data center, the system can assign the probe to one of four big3d agents: one for each BIG-IP system at the data center. To select a big3d agent, the Global Traffic Manager looks to see which big3d agent is responsible for the fewest probes. The big3d agent with the lowest number of probes is tasked with conducting the probe. The Global Traffic Manager checks this statistic each time it needs to delegate the probe; as a result, the big3d agent selected could change from probe instance to probe instance.
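The fewest-probes selection described above amounts to a least-loaded choice each time the probe fires. A minimal sketch, assuming a simple table of current probe counts per agent (the agent names are hypothetical, and the tie-breaking rule shown is an assumption, since the real behavior is not documented):

```python
def select_big3d(probe_counts: dict[str, int]) -> str:
    """Return the big3d agent currently responsible for the fewest
    probes. Ties are broken by agent name for determinism -- an
    assumption; the real tie-breaking behavior is not documented."""
    return min(sorted(probe_counts), key=probe_counts.get)

# Four agents in the Los Angeles data center (hypothetical names).
counts = {"bigip-la-1": 12, "bigip-la-2": 9, "bigip-la-3": 15, "bigip-la-4": 9}
print(select_big3d(counts))  # bigip-la-2
```

Because the counts are re-read on every delegation, a newly idle agent immediately starts absorbing probes, which keeps the load spread evenly over time.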
In situations where a big3d agent does not reside in the same data center as the resource, the designated Global Traffic Manager selects a big3d from all available big3d agents on the network. Again, the agent selected is the agent with the fewest number of probe requests, and this check occurs each time the probe is conducted.
For example, SiteRequest adds a new set of web servers in Tokyo. At this location, the company has yet to install its BIG-IP systems; however, the current set of Global Traffic Managers in Los Angeles and New York are managing traffic to these web servers. When initiating a probe request to determine the availability of one of these servers, a Global Traffic Manager is selected to manage the probe request. Then, that system chooses a big3d agent to probe the web server, selecting any big3d agent located in Los Angeles, New York, or London.
In most cases, the probes sent to internal network resources are handled through a distributed load balancing system that first selects a Global Traffic Manager, and then selects a big3d agent. However, in some circumstances you might want to assign a specific server to conduct a probe of a given resource. For those situations, you can use the Statistics Collection Server setting. This option is only available for non-BIG-IP systems.
On the Main tab of the navigation pane, expand Global Traffic and then click Servers.
Click the Create button.
Alternatively, you can select an existing server by clicking the appropriate server entry from the main Servers screen.
From the Configuration list, select Advanced.
From the Statistics Collection Server list, select a BIG-IP system that you want to use to conduct probes for this server.
Click the Finished button to save your changes.
The Global Traffic Manager uses the specified BIG-IP system to conduct probes on this server unless that system becomes unavailable.
One type of probe for which Global Traffic Manager systems are responsible is the probe of Local Domain Name System, or LDNS, servers. Unlike probes conducted on internal systems, such as web servers, probes of LDNS servers require that the Global Traffic Manager verify data from a resource that exists outside the network. Typically, this data is the path information the Global Traffic Manager requires when conducting the Quality of Service, Round Trip Time, Completion Rate, and Hops load balancing methods.
Note: If you do not use Quality of Service load balancing, the Global Traffic Manager does not conduct probes of LDNS servers.
When a given LDNS server makes a DNS request for a wide IP, that request is sent to a single Global Traffic Manager. The Global Traffic Manager then creates an LDNS server entry, and assigns that entry one of the following states:
New: the Global Traffic Manager has not come across this particular LDNS server before
Active: the Global Traffic Manager already has an existing entry for this LDNS server
Pending: the Global Traffic Manager has been contacted by this LDNS server before; however, the server has yet to respond to a probe from a Global Traffic Manager on this network
In general, the New and Pending states are temporary states; an LDNS server remains in one of these states only until it responds to the first probe request from a Global Traffic Manager. Once the Global Traffic Manager receives a response, the LDNS entry is moved to the Active state. Each Global Traffic Manager within a given synchronization group shares the LDNS entries that are assigned this state, resulting in the synchronization group having a common list of known LDNS servers.
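The state transitions described above form a small state machine: an entry is created as New on first contact, and any entry moves to Active once the LDNS server answers a probe. The following Python sketch models those transitions only; the entry table, function names, and IP address are all illustrative, and the Pending state's creation path (a prior contact without a probe response) is omitted for brevity.

```python
def on_dns_request(entries: dict[str, str], ldns_ip: str) -> str:
    """Record a DNS request from an LDNS server, creating a New
    entry for a previously unseen server."""
    if ldns_ip not in entries:
        entries[ldns_ip] = "new"
    return entries[ldns_ip]

def on_probe_response(entries: dict[str, str], ldns_ip: str) -> str:
    """Once the LDNS server answers a probe, its entry becomes
    Active; Active entries are shared across the sync group."""
    entries[ldns_ip] = "active"
    return entries[ldns_ip]

table: dict[str, str] = {}
print(on_dns_request(table, "203.0.113.7"))     # new
print(on_probe_response(table, "203.0.113.7"))  # active
print(on_dns_request(table, "203.0.113.7"))     # active
```

Because only Active entries are synchronized, the group's shared list contains exactly the LDNS servers that have proven reachable.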
Unlike internal probes, LDNS probes are not load balanced across Global Traffic Managers. Instead, the Global Traffic Manager that the LDNS server first queries becomes responsible for the initial probe to that LDNS server. These probes are load balanced, however, across multiple big3d agents, with preference given to big3d agents that either belong to the same data center as the responding Global Traffic Manager, or belong to the same link through which the Global Traffic Manager received the LDNS query. After the initial probe, an algorithm is used to load balance subsequent probes across the available Global Traffic Manager systems.
The Global Traffic Manager that responds to the request determines if it already has an entry for the LDNS server. If it does not, it creates an entry with a status of New.
The Global Traffic Manager delegates the probe of the LDNS server to a big3d agent; preferably a big3d agent that resides in the same data center as the Global Traffic Manager.
When the LDNS server responds to the probe, it sends its information to the Global Traffic Manager.
The Global Traffic Manager synchronizes its list of active LDNS servers with the other members of its synchronization group.
Again, if you do not use the Quality of Service load balancing modes, the Global Traffic Managers do not conduct LDNS server probes.