Nodes
A node is a logical object on the BIG-IP® Local Traffic Manager™ system that identifies the IP address of a physical resource on the network. You can explicitly create a node, or you can instruct Local Traffic Manager to automatically create one when you add a pool member to a load balancing pool.
The difference between a node and a pool member is that a node is designated by the device's IP address only (10.10.10.10), while designation of a pool member includes an IP address and a service (such as 10.10.10.10:80).
A primary feature of nodes is their association with health monitors. Like pool members, nodes can be associated with health monitors as a way to determine server status. However, a health monitor for a pool member reports the status of a service running on the device, whereas a health monitor associated with a node reports status of the device itself.
For example, if an ICMP health monitor is associated with node 10.10.10.10, which corresponds to pool member 10.10.10.10:80, and the monitor reports the node as being in a down state, then the monitor also reports the pool member as being down. Conversely, if the monitor reports the node as being in an up state, then the monitor reports the pool member as being either up or down, depending on the status of the service running on it.
Nodes are the basis for creating a load balancing pool. For any server that you want to be part of a load balancing pool, you must first create a node, that is, designate that server as a node. After designating the server as a node, you can add the node to a pool as a pool member. You can also associate a health monitor with the node, to report the status of that server.
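For example, assuming command-line access to the system, a rough tmsh sketch of this workflow might look like the following; the node name, pool name, address, and monitor are placeholders, and the exact syntax can vary by software version.

    # Designate the server as a node (name and address are placeholders)
    tmsh create ltm node Node_1 address 10.10.10.10
    # Add the node to a load balancing pool as a member on service port 80
    tmsh create ltm pool pool_http members add { Node_1:80 }
    # Associate a health monitor with the node itself
    tmsh modify ltm node Node_1 monitor icmp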
To configure and manage nodes, log in to the BIG-IP Configuration utility, and on the Main tab, expand Local Traffic, and click Nodes.
For each node that you define, you must specify an IP address. An example of a node IP address is 10.10.10.10. You can also give the node a unique node name, such as Node_1.
The configurable settings for a node are summarized below. More detailed descriptions of specific settings follow later in this chapter.

Health Monitors
Defines whether the BIG-IP system should associate the default monitor with the node, or whether you want to specifically assign a monitor to the node.

Select Monitors
Specifies the monitors that the BIG-IP system is to associate with the node. This setting is available only when you set the Health Monitors setting to Node Specific.

Availability Requirement
Specifies the minimum number of health monitors that must report a node as being available to receive traffic before the BIG-IP system reports that node as being in an up state. This setting is available only when you set the Health Monitors setting to Node Specific.

Connection Limit
Specifies the maximum number of concurrent connections allowed on a node.

Connection Rate Limit
Specifies the maximum rate of new connections allowed for the node, per second. A value of 0 indicates that there is no limit to the number of new connections allowed per second.

Address
Specifies the IP address of the node. If you are using a route domain other than route domain 0, you can append a route domain ID to this node address. For example, if the node address applies to route domain 1, then you can specify a node address of 10.10.10.10%1, as shown in the command-line example that follows.
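For example, a node in route domain 1 might be created on the command line with a tmsh command along these lines (the node name is a placeholder, and syntax can vary by version):

    tmsh create ltm node Node_rd1 address 10.10.10.10%1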
Using Local Traffic Manager, you can monitor the health or performance of your nodes by associating monitors with those nodes. This is similar to associating a monitor with a load balancing pool, except that in the case of nodes, you are monitoring the IP address, whereas with pools, you are monitoring the services that are active on the pool members.
Local Traffic Manager contains many different pre-configured monitors that you can associate with nodes, depending on the type of traffic you want to monitor. You can also create your own custom monitors and associate them with nodes. The only pre-configured monitors that are not available for associating with nodes are monitors that are specifically designed to monitor pools or pool members rather than nodes.
Note: Any monitor that you associate with a node must reside either in partition Common or in the partition that contains the node.
There are two ways that you can associate a monitor with a node: by assigning the same monitor (that is, a default monitor) to multiple nodes at the same time, or by explicitly associating a monitor with each node as you create it.
If you create a pool member without first creating the parent node, Local Traffic Manager automatically creates the parent node for you. Fortunately, you can configure Local Traffic Manager to automatically associate one or more monitor types with every node that Local Traffic Manager creates. This eliminates the task of having to explicitly choose monitors for each node.
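For example, a tmsh command along the following lines (a sketch; syntax can vary by version) designates the icmp monitor as the default monitor for nodes:

    tmsh modify ltm default-node-monitor rule icmp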
If a user with permission to manage objects in partition Common disables a monitor that is designated as the default monitor for nodes (such as the icmp monitor), this affects all nodes on the system. Ensure that the default monitor for nodes always resides in partition Common.
To specify default monitors, you must have the Administrator user role assigned to your user account.
If all nodes reside in the same partition, the default monitor must reside in that partition or in partition Common. If nodes reside in separate partitions, then the default monitor must reside in partition Common.
Sometimes, you might want to explicitly create a node, rather than having Local Traffic Manager create the node automatically. In this case, when you create the node, you can either accept the default monitor association or explicitly assign one or more node-specific monitors, as shown in the sketch that follows.
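As a rough command-line sketch (names, addresses, and monitors are placeholders), the difference looks like this:

    # Create a node that keeps the default monitor association
    tmsh create ltm node Node_1 address 10.10.10.10
    # Create a node with an explicitly assigned node-specific monitor
    tmsh create ltm node Node_2 address 10.10.10.11 monitor gateway_icmp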
You can remove a monitor that is explicitly associated with a specific node. When removing a monitor associated with a specific node, you can either remove the monitor association altogether, or change it so that only the default monitor is associated with the node.
Alternatively, you can remove any default monitors, that is, monitors that Local Traffic Manager automatically associates with any node that you create.
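On the command line, these operations might be sketched roughly as follows (the node name is a placeholder, and syntax can vary by version):

    # Remove the monitor association from a specific node altogether
    tmsh modify ltm node Node_1 monitor none
    # Or change the node so that only the default monitor applies
    tmsh modify ltm node Node_1 monitor default
    # Remove the default monitor that applies to automatically created nodes
    tmsh modify ltm default-node-monitor rule none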
You can specify the minimum number of health monitors that must report a node as being available to receive traffic before Local Traffic Manager reports that node as being in an up state.
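For example, a requirement that at least one of two assigned monitors report the node as available might be expressed roughly as follows (the node and monitor names are illustrative):

    tmsh modify ltm node Node_1 monitor min 1 of { icmp gateway_icmp }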
When you are using the Ratio load balancing method, you can assign a ratio weight to each node in a pool. LTM® uses this ratio weight to determine the correct node for load balancing.
Note that at least one node in the pool must have a ratio value greater than 1. Otherwise, the effect equals that of the Round Robin load balancing method.
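For example, to give one node three times the weight of a node that keeps the default ratio of 1, a command along these lines might be used (a sketch):

    tmsh modify ltm node Node_1 ratio 3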
The Connection Rate Limit setting specifies the maximum rate of new connections allowed for the node. When you specify a connection rate limit, the system controls the number of allowed new connections per second, thus providing a manageable increase in connections without compromising availability. The default value of 0 specifies that there is no limit on the number of connections allowed per second.
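As a rough example, the following command sets both a connection limit and a connection rate limit on a node (the name and values are illustrative only):

    tmsh modify ltm node Node_1 connection-limit 5000 rate-limit 100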
A node must be enabled in order to accept traffic. When a node is disabled, Local Traffic Manager allows existing connections to time out or end normally. In this case, the node can accept new connections only if the connections belong to an existing persistence session. (In this way a disabled node differs from a node that is set to down. The down node allows existing connections to time out, but accepts no new connections whatsoever.)
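On the command line, disabling, forcing down, and re-enabling a node might be sketched as follows (the node name is a placeholder):

    # Disable the node; existing and persisted connections are still honored
    tmsh modify ltm node Node_1 session user-disabled
    # Force the node down; no new connections are accepted at all
    tmsh modify ltm node Node_1 state user-down
    # Return the node to normal operation
    tmsh modify ltm node Node_1 session user-enabled state user-up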
At any time, you can determine the status of a node, using the Configuration utility. You can find this information by displaying the list of nodes and viewing the Status column, or by viewing the Availability property of a node.
The shape of the icon indicates the status that the monitor has reported for that node.
The color of the icon indicates the actual status of the node.
Tip: You can manually set the availability of a node by configuring the Manual Resume attribute of the associated health monitor.
Status conditions for a node include the following:
The node is enabled but is currently unavailable. However, the node might become available later, with no user action required. An example of an unavailable node becoming available automatically is when the number of concurrent connections to the node no longer exceeds the value defined in the node's Connection Limit setting.
The node is enabled but offline because an associated monitor has marked the node as down. To change the status so that the node can receive traffic, user intervention is required.
The node is set to Disabled, although a monitor has marked the node as up. To resume normal operation, you must manually enable the node.
The node is set to Disabled and is down. To resume normal operation, you must manually enable the node.
The node is set to Disabled and is offline either because a user disabled it, or a monitor has marked the node as down. To resume normal operation, you must manually enable the node.
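The status of a node can also be checked from the command line with a command such as the following (a sketch; the node name is a placeholder):

    # Show status and statistics for one node, or omit the name to show all nodes
    tmsh show ltm node Node_1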