Original Publication Date: 01/19/2005
Updated Date: 09/13/2016
This article applies to BIG-IP 9.x. For information about other versions, refer to the corresponding article for your version.
Beginning in BIG-IP 9.0.0, the Traffic Management Microkernel (TMM) processes all load-balanced traffic. TMM runs as a real-time user process within the BIG-IP operating system (TMOS). Prior BIG-IP versions handled traffic processing in the kernel.
The following sections describe the factors that influence how TMM uses the CPU:
CPU utilization on single CPU, single core systems
BIG-IP 9.0.0 through 9.3.1
In BIG-IP 9.0.0 through 9.3.1, TMM consumes all available CPU time on a single CPU, single core system. When TMM is not actively processing traffic, it yields idle CPU cycles, up to 99 percent of the CPU time, to the host as other processes require them. When TMM is actively processing traffic, it releases only up to 20 percent of the CPU time to other processes.
As a result, it is normal to see TMM at or near 100 percent CPU utilization both on idle systems and on systems processing large volumes of traffic. However, if the system is heavily loaded with other host-based tasks, such as a large number of application health monitors, TMM CPU utilization may fall to 80 percent when processing large volumes of traffic, or to 1 percent when idle.
Note: In 9.0.0 through 9.3.1, the top utility does not correctly report TMM's CPU utilization. F5 recommends that you use the bigpipe global command to determine how hard TMM is working.
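For example, from the BIG-IP command line, TMM utilization can be checked with the bigpipe global command recommended above. This is a minimal sketch; it assumes console or SSH access to the BIG-IP system, and the exact statistics displayed vary by version:

```shell
# Display global system statistics, including TMM CPU usage
# (use this instead of top on BIG-IP 9.0.0 through 9.3.1,
# where top does not report TMM utilization correctly)
bigpipe global
```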
BIG-IP 9.4.2 through 9.4.8
Beginning in BIG-IP 9.4.2, TMM no longer consumes all available CPU cycles when idle, and TMM is limited to 90 percent of the CPU cycles, regardless of load. When TMM is idle or processing low volumes of traffic, TMM yields idle cycles to the host, and utilities such as top display a commensurately low percentage of CPU utilization. These changes were introduced in CR75437 as an optimization of the TMM internal polling and scheduling mechanisms.
CPU utilization on systems with multiple processing units (multi-CPU and/or multi-core systems)
BIG-IP 9.0.0 through 9.3.1
On a system with two or more processing units, the highest numbered CPU is dedicated to TMM. The top utility shows TMM using 100 percent of that CPU, while other processes use the remaining CPUs at varying percentages.
BIG-IP 9.4.0 through 9.4.1
Beginning in BIG-IP 9.4.0, the Clustered Multiprocessing (CMP) feature was introduced for multi-processor BIG-IP platforms. CPU utilization on multi-processor platforms that are CMP-capable will behave similarly to BIG-IP 9.0.0 through 9.3.1, except that the BIG-IP system will launch a separate TMM process for each CPU, and will only yield up to 10 percent of processor time to other processes. As a result, when CMP is enabled, TMM will consume at least 90 percent of the CPU time on all processors, and may consume up to 100 percent when processing traffic.
Although they are multi-processor platforms, BIG-IP 6400 and 6800 do not support CMP in these versions. They run only one TMM instance and process traffic as previously noted for BIG-IP 9.0.0 through 9.3.1.
Note: For more information about the CMP feature, refer to SOL7751: Overview of Clustered Multiprocessing (9.x - 10.x).
BIG-IP 9.4.2 through 9.4.8
On CMP-capable multi-processor platforms (BIG-IP 1600, 3600, 3900, 6900, 8400, 8800, and 8900), in BIG-IP 9.4.2 through 9.4.8, the BIG-IP system will launch a separate TMM process for each processing unit. TMM no longer consumes all available CPU cycles when idle, and TMM is limited to 90 percent of the CPU cycles, regardless of load. When TMM is idle or processing low volumes of traffic, utilities such as top will display a commensurately low percentage of CPU utilization for each processing unit and TMM instance. These changes were introduced in CR75437 as an optimization of the TMM internal polling and scheduling mechanisms.
Multi-processor platforms that do not support CMP in these versions (BIG-IP 6400 and 6800) run only one TMM instance and process traffic as previously noted for BIG-IP 9.0.0 through 9.3.1.
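One way to confirm how many TMM instances are running, and therefore whether CMP is in effect, is to count the TMM processes from the shell. This is an illustrative sketch; the process name may vary slightly by version:

```shell
# Count running TMM processes: CMP-enabled platforms run one per
# processing unit, while non-CMP platforms (6400/6800) run only one.
# The [t]mm pattern prevents grep from matching its own process.
ps ax | grep -c '[t]mm'
```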
BIG-IP 9.4.0 through 9.4.8 licensed for ASM or WebAccelerator
When you enable the BIG-IP ASM or BIG-IP WebAccelerator module on a multi-processor system, CMP is automatically disabled globally. The TMM process is assigned exclusively to the highest numbered CPU, and the remaining processors are reserved for use by the host operating system and the BIG-IP ASM or BIG-IP WebAccelerator processes. This behavior is apparent when using the top command; TMM is observed to be running on one CPU, and that CPU displays 100 percent usage. The Configuration utility displays the correct CPU and TMM usage for each core.
To re-enable CMP on systems that have the BIG-IP WebAccelerator or BIG-IP ASM module installed, you must disable the module, either on the System > License > Modules tab or by setting the Module.ASM or Module.WA database variable to disable.
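As a sketch of the database-variable method, the module can be disabled from the command line. The exact bigpipe db syntax may vary by version, so verify it against your system before use:

```shell
# Disable the ASM module so that CMP can be re-enabled
bigpipe db Module.ASM disable

# Or, disable the WebAccelerator module instead
bigpipe db Module.WA disable

# Save the change to the stored configuration
bigpipe save
```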
Memory reservation and allocation
In BIG-IP 9.0.0 through 9.4.6, the BIG-IP system reserves memory for TMM. The reserved memory is not available to the operating system for general use. As a result, the memory figures reported by traditional UNIX utilities do not account for the memory reserved for TMM.
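To view memory figures that include the TMM reservation, the bigpipe memory command can be used instead of the traditional UNIX utilities. A minimal sketch, assuming command-line access to the BIG-IP system:

```shell
# Display TMM memory usage, including the memory reserved for TMM
# that utilities such as free and top do not attribute correctly
bigpipe memory show
```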
Note: For more information, refer to SOL10099: The bigpipe memory show command now displays the total physical memory.