Manual Chapter : Preventing DoS Attacks on Applications

Applies To:


BIG-IP ASM

  • 11.6.5, 11.6.4, 11.6.3, 11.6.2, 11.6.1

What is a DoS attack?

A denial-of-service attack (DoS attack) or distributed denial-of-service attack (DDoS attack) makes a victim's resource unavailable to its intended users, or obstructs the communication media between the intended users and the victimized site so that they can no longer communicate adequately. Perpetrators of DoS attacks typically target sites or services, such as banks, credit card payment gateways, and e-commerce web sites.

Application Security Manager™ (ASM) helps protect web applications from DoS attacks aimed at the resources that are used for serving the application: the web server, web framework, and the application logic. Advanced Firewall Manager™ (AFM) helps prevent network, SIP, and DNS DoS and DDoS attacks.

HTTP-GET attacks and page flood attacks are typical examples of application DoS attacks. These attacks are initiated either from a single user (single IP address) or from thousands of computers (a distributed DoS attack), overwhelming the target system. In page flood attacks, the attacker downloads all the resources on the page (images, scripts, and so on), while an HTTP-GET flood repeatedly requests specific URLs regardless of their place in the application.

About recognizing DoS attacks

Application Security Manager™ determines that traffic is a DoS attack based on calculations for transaction rates on the client side (TPS-based) or latency on the server side (latency-based). You can specify the calculations that you want the system to use.

Note: You can set up both methods of detection to work independently or you can set them up to work concurrently to detect attacks on both the client side and server side. Whichever method detects the attack handles DoS protection.

In addition, the system can protect web applications against DoS attacks on heavy URLs. Heavy URL protection means that during a DoS attack, the system protects the heavy URLs using the methods configured in the DoS profile.

You can view details about DoS attacks that the system detected and logged in the event logs and DoS reports. You can also configure remote logging support for DoS attacks when creating a logging profile.

When to use different DoS protections

Application Security Manager provides several different types of DoS protections that you can configure to protect applications. The following table describes when it is most advantageous to use the different protections. You can use any combination of the protections.

DoS Protection When to Use
TPS-based protection To focus protection on the client side to detect an attack right away.
Latency-based protection To focus protection on the server side where attacks are detected when a server slowdown occurs.
Heavy URLs If application users can query a database or submit complex queries that may slow the system down.
Proactive bot defense To stop DoS attacks before they compromise the system. Affords great protection but impacts performance.
CAPTCHA challenge To stop non-human attackers by presenting a character recognition challenge to suspicious users.

About configuring TPS-based DoS protection

When setting up DoS protection, you can configure the system to prevent DoS attacks based on transaction rates (TPS-based anomaly detection). If you choose TPS-based anomaly protection, the system detects DoS attacks from the client side using the following calculations:

Transaction rate detection interval
A short-term average of recent requests per second (for a specific URL or from an IP address) that is updated every 10 seconds.
Note: The averages for IP address and URL counts are done for each site, that is, for each virtual server and associated DoS profile. If one virtual server has multiple DoS profiles (implemented using a local traffic policy), then each DoS profile has its own statistics within the context of the virtual server.
Transaction rate history interval
A longer-term average of requests per second (for a specific URL or from an IP address), calculated for the past hour and updated every 10 seconds.

If the ratio of the transaction rate during the detection interval to the transaction rate during the history interval is greater than the percentage indicated in the TPS increased by setting, the system considers the web site to be under attack, or the URL, IP address, or geolocation to be suspicious. In addition, if the transaction rate during the detection interval is greater than the TPS reached setting (regardless of the history interval), the respective URL, IP address, or geolocation is considered suspicious, or the site is considered under attack.
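As a conceptual sketch (not F5's implementation), the two TPS criteria above can be expressed as follows. The default values mirror the IP Detection Criteria settings described later in this chapter (TPS increased by, TPS reached, and Minimum TPS Threshold for detection):

```python
# Conceptual sketch of TPS-based anomaly detection; NOT F5's actual code.
# Defaults mirror the IP Detection Criteria defaults documented later in
# this chapter (500%, 200 TPS, 40 TPS).

def is_suspicious(detection_tps, history_tps,
                  tps_increased_by=500, tps_reached=200,
                  min_tps_threshold=40):
    """Return True if an entity (IP address, URL, or geolocation) meets
    either the absolute or the relative TPS criterion."""
    # Absolute criterion: the short-term rate crossed the hard ceiling,
    # regardless of the history interval.
    if detection_tps >= tps_reached:
        return True
    # Relative criterion: the short-term rate grew by the configured
    # percentage over the hour-long history average, and is above the
    # minimum detection floor.
    if history_tps > 0:
        increase_pct = detection_tps / history_tps * 100
        if increase_pct > tps_increased_by and detection_tps >= min_tps_threshold:
            return True
    return False

print(is_suspicious(detection_tps=250, history_tps=30))  # True (absolute)
print(is_suspicious(detection_tps=50, history_tps=5))    # True (relative)
print(is_suspicious(detection_tps=45, history_tps=40))   # False (normal load)
```

Note that the relative criterion alone cannot catch an attack that ramps up gradually, which is why the absolute TPS reached ceiling exists alongside it.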

Note that TPS-based protection might detect a DoS attack simply because many users are trying to access the server all at once, such as during a busy time or when a new product comes out. In this case, the attack might be a false positive because the users are legitimate. But the advantage of TPS-based DoS protection is that attacks can be detected earlier than when using latency-based protection. So it is important to understand the typical maximum peak loads on your system when setting up DoS protection, and use the methods that are best for your application.

About configuring latency-based DoS protection

When setting up DoS protection, you can configure the system to prevent DoS attacks based on the server side (latency-based anomaly detection). In latency-based detection, it takes a latency increase and at least one suspicious IP address, URL, heavy URL, site-wide entry, or geolocation to consider the activity to be an attack.

Note: The average latency is measured for each site, that is, for each virtual server and associated DoS profile. If one virtual server has multiple DoS profiles (implemented using a local traffic policy), then each DoS profile has its own statistics within the context of the virtual server.

If the ratio of recent to historical latency is greater than the Latency increased by setting, a prerequisite for the presence of an attack is satisfied, but that alone is not sufficient. It also takes at least one suspicious IP address or geolocation, one attacked URL based on TPS criteria, one heavy URL, or one site-wide entry for the system to declare an attack and start mitigation. In addition, if the recent latency is greater than the Latency reached setting (regardless of the history interval), the respective IP address is considered suspicious or the URL is considered under attack.
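The two-part condition, a latency spike plus at least one flagged entity, can be sketched as follows. This is a conceptual illustration, not F5's code, and the threshold values are illustrative placeholders for the Latency increased by and Latency reached settings, not documented defaults:

```python
# Conceptual sketch of latency-based attack declaration; NOT F5's code.
# Threshold values here are illustrative placeholders, not product defaults.

def latency_attack_detected(recent_ms, historical_ms, suspicious_entities,
                            latency_increased_by=500, latency_reached=10000):
    """Declare an attack only if latency spiked AND at least one entity
    (IP address, URL, heavy URL, site-wide entry, or geolocation) was
    flagged as suspicious."""
    # Absolute criterion: recent latency crossed the hard ceiling.
    spike = recent_ms >= latency_reached
    # Relative criterion: recent latency grew by the configured percentage
    # over the historical average.
    if not spike and historical_ms > 0:
        spike = recent_ms / historical_ms * 100 > latency_increased_by
    # A spike is only a prerequisite: some entity must also be suspicious.
    return spike and len(suspicious_entities) > 0

print(latency_attack_detected(3000, 400, ["10.0.0.7"]))  # True
print(latency_attack_detected(3000, 400, []))            # False (no entity)
print(latency_attack_detected(500, 400, ["10.0.0.7"]))   # False (no spike)
```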

Latency-based protection is less prone to false positives than TPS-based protection because in a DoS attack the server is reaching capacity and response time is slow, which impacts all users. Increased latency can be used as a trigger for detecting a layer 7 attack. After detecting a significant latency increase, it is important to determine whether further action is needed. By examining the increase in requests per second and comparing these numbers with past activity, you can distinguish suspicious latency increases from normal ones.

About DoS prevention policy

When setting up either transaction-based or latency-based DoS protection, you can specify a prevention policy that determines how the system recognizes and mitigates DoS attacks. The prevention policy can use the following methods:

  • JavaScript challenges (also called Client-Side Integrity Defense)
  • CAPTCHA challenges
  • Request blocking (including Rate Limiting or Block All)
The system can issue a JavaScript challenge to analyze whether the client is using a legitimate browser (one that can respond to the challenge) when the system encounters a suspicious IP address, URL, geolocation, or site-wide criteria. If the client does execute JavaScript in response to the challenge, the system purposely slows down the interaction (by posing a complex computation in JavaScript that typically takes around 2 seconds on most client platforms). The Client-Side Integrity Defense mitigations are enacted only when the operation mode of the anomaly is set to blocking.

Based on the same suspicious criteria, the system can also issue a CAPTCHA (character recognition) challenge to verify that the client is human. Depending on how strict you want to enforce DoS protection, you can limit the number of requests that are allowed through to the server or block requests that are deemed suspicious.

You can also use request blocking in the prevention policy to specify conditions for when the system blocks requests. Note that the system blocks requests during a DoS attack only when the TPS-based or latency-based anomaly's Operation Mode is set to Blocking. You can use request blocking to rate limit or block all requests from suspicious IP addresses, suspicious countries, or URLs suspected of being under attack. Site-wide rate limiting also blocks requests to web sites suspected of being under attack. If you block all requests, the system blocks suspicious IP addresses and geolocations except those on the whitelist. If you use rate limiting, the system blocks some requests depending on the threshold detection criteria set for the anomaly.
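Rate limiting to a pre-attack average can be sketched as random early drop: each request from a suspicious source is admitted with a probability that brings the admitted rate down to the historical rate. This is a conceptual illustration only, not F5's implementation:

```python
import random

# Conceptual sketch of rate limiting to the pre-attack average;
# NOT F5's implementation.

def admit_request(current_tps, history_tps):
    """Randomly drop requests so the admitted rate approximates the
    historical (pre-attack) rate."""
    if current_tps <= history_tps:
        return True  # under the limit: let everything through
    # Keep each request with probability history/current, so the expected
    # admitted rate equals the historical rate.
    keep_probability = history_tps / current_tps
    return random.random() < keep_probability

# With a 1000 TPS flood against a 100 TPS history, roughly 10% of
# requests are admitted.
admitted = sum(admit_request(1000, 100) for _ in range(10000))
print(admitted)  # approximately 1000 of 10000
```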

The mitigation methods that you select are used in the order they appear on the screen. The system escalates to the next method only if the previous method was not able to stop the attack.

About geolocation mitigation

You can mitigate DoS attacks based on geolocation by detecting countries that send suspicious traffic. This is part of the prevention policy in the DoS profile for latency-based and TPS-based anomalies, and can be used to respond to unusual activity as follows:

  • Geolocation-based Client-Side Integrity Defense: If traffic from countries matches the thresholds configured in the DoS profile, the system considers those countries suspicious, and sends a JavaScript challenge to each suspicious country.
  • Geolocation-based CAPTCHA challenge: If traffic from countries matches the thresholds configured in the DoS profile, the system considers those countries suspicious, and issues a CAPTCHA challenge to each suspicious country.
  • Geolocation-based request dropping: The system drops all, or some, requests from suspicious countries.

In addition, you can add countries to a geolocation whitelist (traffic from these countries is never blocked) and a blacklist (traffic from these countries is always blocked when a DoS attack is detected).
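The whitelist/blacklist precedence described above can be sketched as a simple decision function (a conceptual illustration, not F5's code):

```python
# Conceptual sketch of the geolocation whitelist/blacklist decision;
# NOT F5's implementation.

def geo_action(country, whitelist, blacklist, attack_in_progress):
    """Decide what to do with traffic from a country during DoS handling."""
    if country in whitelist:
        return "allow"      # whitelisted countries are never blocked
    if country in blacklist and attack_in_progress:
        return "block"      # blacklisted countries are blocked during an attack
    return "evaluate"       # otherwise apply the normal geolocation criteria

print(geo_action("SE", {"SE"}, {"XX"}, attack_in_progress=True))   # allow
print(geo_action("XX", {"SE"}, {"XX"}, attack_in_progress=True))   # block
print(geo_action("XX", {"SE"}, {"XX"}, attack_in_progress=False))  # evaluate
```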

About heavy URL protection

Heavy URLs are URLs that may consume considerable server resources per request. Heavy URLs respond with low latency most of the time, but can easily reach high latency under specific conditions. Heavy URLs are not necessarily heavy all the time, but tend to become heavy especially during attacks. Therefore, low-rate requests to those URLs can cause significant DoS attacks that are hard to distinguish from legitimate traffic.

Typically, heavy URLs involve complex database queries; for example, retrieving historical stock quotes. In most cases, users request recent quotes with weekly resolution, and those queries quickly yield responses. However, an attack might involve requesting five years of quotes with day-by-day resolution, which requires retrieval of large amounts of data, and consumes considerably more resources.

Application Security Manager™ (ASM) allows you to configure protection from heavy URLs in a DoS profile. You can specify a latency threshold for automatically detecting heavy URLs. If some of the web site's URLs could potentially become heavy URLs, you can add them so the system will keep an eye on them, and you can add URLs that should be ignored and not considered heavy.

ASM measures the tail latency of each URL and of the whole site for 24 hours to get a good sample of request behavior. A URL is considered heavy if its average tail latency is more than twice that of the site latency for the 24-hour period.
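The 2x-site-latency rule can be sketched as follows (a conceptual illustration, not F5's code; the URL names and measurements are hypothetical):

```python
# Conceptual sketch of heavy-URL detection after the 24-hour measurement
# window; NOT F5's implementation. URL names and latencies are hypothetical.

def find_heavy_urls(url_tail_latency_ms, site_latency_ms):
    """Return URLs whose tail latency is more than twice the whole site's
    latency for the same 24-hour period."""
    return sorted(url for url, latency in url_tail_latency_ms.items()
                  if latency > 2 * site_latency_ms)

measurements = {"/quotes/history": 900, "/quotes/latest": 120, "/login": 80}
print(find_heavy_urls(measurements, site_latency_ms=150))
# ['/quotes/history']  (900 ms > 2 * 150 ms; the others are not heavy)
```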

About proactive bot defense

Application Security Manager™ (ASM) can proactively defend your applications against automated attacks by web robots, called bots for short. This defense method, called proactive bot defense, can prevent layer 7 DoS attacks, web scraping, and brute force attacks from starting. By preventing bots from accessing the web site, these attacks are prevented as well.

Working together with anomaly detection and DoS protection, proactive bot defense helps identify and mitigate attacks before they cause damage to the site. Because this feature generally inspects most traffic, it affects system performance, but requires fewer resources than traditional web scraping and brute force protections. You can use proactive bot defense in addition to the web scraping and brute force protections that are available in ASM security policies. Proactive bot defense is enforced through a DoS profile and does not require a security policy.

When clients access a protected web site for the first time, the system sends a JavaScript challenge to the browser. Therefore, when using this feature, it is important that clients use browsers that allow JavaScript.

If the client successfully evaluates the challenge and resends the request with a valid cookie, the system allows the client to reach the server. Requests that do not answer the challenge remain unanswered and are not sent to the server. Requests sent to non-HTML URLs without the cookie are dropped and considered to be bots.
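The challenge flow above can be sketched as a three-way decision (a conceptual illustration with hypothetical names; not F5's actual cookie or challenge mechanics):

```python
# Conceptual sketch of the proactive bot defense request flow;
# NOT F5's implementation. Names are hypothetical.

def handle_request(has_valid_cookie, is_html_url):
    """Decide how to treat an incoming request under proactive bot defense."""
    if has_valid_cookie:
        return "forward"            # challenge already passed: reach the server
    if is_html_url:
        return "send_js_challenge"  # first visit: challenge the browser
    # Cookieless requests to non-HTML URLs are considered bots and dropped.
    return "drop"

print(handle_request(True, False))   # forward
print(handle_request(False, True))   # send_js_challenge
print(handle_request(False, False))  # drop
```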

You can configure lists of URLs to consider safe so that the system does not need to validate them. This speeds up access time to the web site. If your application accesses many cross-domain resources and you have a list of those domains, you may want to select an option that validates cross-domain requests to those domains.

About cross-domain requests

Proactive bot defense in a DoS profile allows you to specify which cross-domain requests are legal. Cross-domain requests are HTTP requests for resources from a different domain than the domain of the resource making the request.

If your application accesses many cross-domain resources and you have a list of those domains, you can validate cross-domain requests to those domains.

For example, your web site uses two domains, site1.com (the main site) and site2.com (where resources are stored). You can configure this in the DoS profile by enabling proactive bot defense, choosing one of the Allowed configured domains options for the Cross-Domain Requests setting, and specifying both of the web sites in the list of related site domains. When the browser makes a request to site1.com, it gets cookies for both site1.com and site2.com independently and simultaneously, and cross domain requests from site1.com to site2.com are allowed.

If only site1.com is configured as a related site domain, when the browser makes a request to site1.com, it gets a cookie for site1.com only. If the browser makes a cross-domain request to get an image from site2.com, it gets a cookie and is allowed only if it already has a valid site1.com cookie.
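The two scenarios above can be sketched as a single rule: a cross-domain request passes if the client holds a valid cookie for the target domain, or already holds a valid cookie for any configured related site domain. This is a conceptual illustration, not F5's cookie mechanics:

```python
# Conceptual sketch of the related-site-domains rule for cross-domain
# requests; NOT F5's implementation.

def allow_cross_domain(target, related_domains, cookies):
    """Allow if the client holds a valid cookie for the target domain, or
    already holds one for any configured related site domain."""
    if target in cookies:
        return True
    return any(domain in cookies for domain in related_domains)

# Both sites configured: the browser received cookies for both up front.
print(allow_cross_domain("site2.com", {"site1.com", "site2.com"},
                         cookies={"site1.com", "site2.com"}))  # True

# Only site1.com configured: the site2.com request rides on the site1.com
# cookie that the client already earned.
print(allow_cross_domain("site2.com", {"site1.com"},
                         cookies={"site1.com"}))  # True

# No cookie at all: the request is not allowed.
print(allow_cross_domain("site2.com", {"site1.com"}, cookies=set()))  # False
```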

About site-wide DoS mitigation

In order to mitigate highly distributed DoS attacks, such as those instigated using large scale botnets attacking multiple URLs, you can include site-wide mitigation in a DoS profile. You can use site-wide mitigation as part of the prevention policy for either TPS-based or latency-based DoS protection. In this case, the whole site can be considered suspicious as opposed to a particular URL or IP address. Site-wide mitigation goes into effect when the system determines that the whole site is experiencing high-volume traffic but is not able to pinpoint and handle the problem.

The system implements the site-wide mitigation method only as a last resort because it may cause the system to drop legitimate requests. However, it maintains, at least partially, the availability of the web site even when it is under attack. When the system applies site-wide mitigation, it is because all other active detection methods were unable to stop the attack.

The whole site is considered suspicious when configured thresholds are crossed, and in parallel, specific IP addresses and URLs could also be found to be suspicious. The mitigation continues until the maximum duration elapses or when the whole site stops being suspicious. That is, there are no suspicious URLs, no suspicious IP addresses, and the whole site is no longer suspicious.

About DoS protection and HTTP caching

HTTP caching enables the BIG-IP® system to store frequently requested web objects (or static content) in memory to save bandwidth and reduce traffic load on web servers. The Web Acceleration profile has the settings to configure caching.

If you are using HTTP caching along with DoS protection, you need to understand how DoS protection for cached content works. In this case, URLs serving cached content are considered a DoS attack if they exceed the relative TPS increased by percentage (and not the absolute TPS reached number). Requests to static or cacheable URLs are always mitigated by rate limiting. This is true even during periods of mitigation using client-side integrity or CAPTCHA, and even when those mitigations are not only URL-based.

Overview: Preventing DoS attacks on applications

You can configure the Application Security Manager™ to protect against DoS attacks on web applications. Depending on your configuration, the system detects DoS attacks based on transactions per second (TPS) on the client side, server latency, heavy URLs, geolocation, and failed CAPTCHA response.

You configure DoS protection for Layer 7 by creating a DoS profile with Application Security enabled. You then associate the DoS profile with one or more virtual servers representing applications that you want to protect. DoS protection is not part of a security policy.

The main factors in establishing the prevention policy are:

  • Attackers: The clients that initiate the actual attacks. They are represented by their IP addresses and the geolocations they come from.
  • Servers: The web application servers that are under attack. You can view them site-wide as the pairing of the virtual server and the DoS profile, by the URL, or as a pool member.
  • BIG-IP system: The middle tier that detects attacks and associated suspicious entities, then mitigates the attacks, or blocks or drops requests depending on the options you configure in the DoS profile.

Task Summary

Configuring DoS protection for applications

You can configure Application Security Manager™ to protect against and mitigate DoS attacks, and increase system security.
  1. On the Main tab, click Security > DoS Protection > DoS Profiles.
    The DoS Profiles list screen opens.
  2. Click Create.
    The Create New DoS Profile screen opens.
  3. In the Profile Name field, type the name for the profile.
  4. Select the Application Security check box.
    The screen refreshes and displays additional configuration settings.
  5. If you have written an application DoS iRule to specify how the system handles a DoS attack and recovers afterwards, select the Trigger iRule setting.
  6. If you want to set up DoS protection from the client side, in the TPS-based Anomaly area, select an Operation Mode and set up TPS-based DoS protection.
    Another task describes how to configure the settings.
  7. If you want to set up DoS protection from the server side, in the Latency-based Anomaly area, select an Operation Mode and set up latency-based DoS protection.
    Another task describes how to configure the settings.
  8. If you want to set up protection for heavy URLs, in the Heavy URL Protection area, select Heavy URL Protection and configure the protection settings.
    Another task describes how to configure the settings.
  9. To omit certain addresses, for the IP Address Whitelist setting, type IP addresses or subnets that do not need to be examined for DoS attacks, and click Add.
    Note: You can add up to 20 IP addresses.
  10. To record traffic (perform a TCP dump) when a DoS attack is occurring, select the Record Traffic During Attacks check box, and specify the options to determine the conditions and how often to perform the dump.
    This option allows you to diagnose the attack vectors and attackers and observe whether the attack was mitigated.
    If a DoS attack occurs, the system creates a TCP dump in /shared/dosl7/tcpdumps on the virtual server where the attack was detected.
  11. If you want to set up proactive DoS protection, in the Proactive Bot Defense area, select an Operation Mode and configure the protection settings.
    Another task describes how to configure the settings.
  12. To set up DoS protection based on the country where a request originates, in the Geolocations area, select countries to allow or disallow.
    1. Move the countries for which you want the system to block traffic during a DoS attack into the Geolocation Blacklist.
    2. Move the countries that you want the system to allow (unless the requests have other problems) into the Geolocation Whitelist.
    3. Select appropriate mitigations for geolocations in the Prevention Policy settings for Latency-based or TPS-based Anomaly.
  13. Click Finished to save the DoS profile.
You have created a DoS profile that provides DoS protection.
Next, consider configuring additional levels of DoS protection such as TPS-based protection, latency-based protection, heavy URLs, and proactive bot defense. Also, the DoS profile needs to be associated with a virtual server before it protects against DoS attacks.

Configuring TPS-based DoS protection

You can configure Application Security Manager™ to mitigate DoS attacks based on transaction rates using TPS-based DoS protection.
  1. On the Main tab, click Security > DoS Protection > DoS Profiles.
    The DoS Profiles list screen opens.
  2. Click the name of an existing DoS profile (or create a new one).
    The DoS Profile Properties screen opens.
  3. Select the Application Security check box.
    The screen refreshes and displays additional configuration settings.
  4. In the TPS-based Anomaly area, for Operation Mode, select an operation mode to determine how the system reacts when it detects a DoS attack.
    Option Description
    Transparent Displays data about DoS attacks on the DoS: Application reporting screen, but does not block requests, or perform any of the client-side integrity defenses.
    Blocking Applies the necessary mitigation steps to suspicious IP addresses, URLs, geolocations, or site-wide. Also displays information about DoS attacks on the DoS: Application reporting screen.
    The screen refreshes to display additional configuration settings when you select an operation mode.
  5. For the Prevention Policy setting, select the Client-Side Integrity Defense options to determine which mitigation methods the system uses to stop DoS attacks.
    Note: The Operation Mode must be Blocking for the system to perform these defenses.
    Option Description
    Source IP-Based Sends a JavaScript challenge to each suspicious IP address (traffic that meets the IP Criteria in the anomaly). The default is disabled.
    Geolocation-Based Sends a JavaScript challenge to requests from a suspicious country (excluding blacklisted and whitelisted geolocations). The Geolocation Criteria in the anomaly defines the conditions for when a country is considered suspicious. The default is disabled.
    URL-Based Sends a JavaScript challenge to each suspicious URL (traffic that meets the URL Criteria in the anomaly). This setting enforces strong protection and prevents distributed DoS attacks but affects more clients. The default is disabled.
    Site-wide Sends a JavaScript challenge to suspicious traffic (traffic that meets the Site-Wide Criteria in the anomaly). The default is disabled.
    For the options selected, the client-side integrity challenge slows the attack rate. Legal browsers process the JavaScript and respond properly, whereas illegal scripts do not.
  6. For the Prevention Policy setting, select the CAPTCHA Challenge options to determine when the system requests that clients respond to a character recognition request.
    Note: If you enable more than one option, the system uses the options in the order in which they are listed.
    Option Description
    Source IP-Based Sends a CAPTCHA challenge to each suspicious IP address (that is, traffic that meets the IP Criteria in the anomaly). The default is disabled.
    Geolocation-Based Sends a CAPTCHA challenge to requests from a suspicious country (excluding blacklisted and whitelisted geolocations). The Geolocation Criteria in the anomaly defines the conditions for when a country is considered suspicious. The default is disabled.
    URL-Based Sends a CAPTCHA challenge to each suspicious URL (that is, traffic that meets the URL Criteria in the anomaly). The default is disabled.
    Site-wide Sends a CAPTCHA challenge to suspicious traffic (that is, traffic that meets the Site-Wide Criteria in the anomaly). The default is disabled.
    For the options selected, the CAPTCHA challenge determines whether a client is a human or an illegal script. Legal clients can process the challenge and respond properly, whereas illegal scripts do not.
  7. For the Prevention Policy setting, select the Request Blocking options to determine when the system should block requests during a DoS attack.
    Important: If you want to use Request Blocking options, the anomaly’s Operation Mode must be set to Blocking. If set to Transparent, the system reports, but does not block, suspicious requests.
    Option Description
    Source IP-Based Select Rate Limit to randomly block requests from suspicious IP addresses. The system limits the rate of requests to the average rate prior to the attack, or lower than the absolute threshold specified by the IP criteria TPS reached setting. Select Block All to block all requests from suspicious IP addresses. The default is enabled and set to Rate Limit.
    Geolocation-Based Select Rate Limit to randomly block requests from suspicious countries (those that meet the Geolocation Criteria in the anomaly). The system allows requests from that country when its request rate per second is less than the legitimate history interval (before the attack started). Select Block All to block all requests from suspicious countries, except geolocations in the whitelist. The default is disabled.
    URL-Based Rate Limit Indicates that when the system detects URLs under attack (those that meet the URL Criteria in the anomaly), the system drops connections to limit the rate of requests to the URL to the average rate prior to the attack. The default is enabled.
    Site-wide Rate Limit Indicates that the system drops requests for the website as a whole if suspected of being under attack (if the website meets the Site-Wide Criteria in the anomaly). The system allows requests for that site when the request rate per second is less than the legitimate history interval (before the attack started), or less than the threshold you configure in the TPS reached setting. The default is enabled.
    Note: If you enable more than one option, the system uses the options in the order in which they are listed.
  8. For IP Detection Criteria, modify the threshold values as needed.
    Note: This setting appears if at least one of these Prevention Policy settings is selected: Source IP-Based in Client Side Integrity Defense, Source IP-Based in the CAPTCHA challenge, or Source IP-Based Rate Limit in Request Blocking.
    If any of these criteria is met, the system handles the attack according to the Prevention Policy settings.
    Option Description
    TPS increased by Specifies that the system considers an IP address to be that of an attacker if the transactions sent per second have increased by this percentage, and the detected TPS is greater than the Minimum TPS Threshold for detection. The default value is 500%.
    TPS reached Specifies that the system considers an IP address to be suspicious if the number of transactions sent per second from an IP address equals, or is greater than, this value. This setting provides an absolute value, so, for example, if an attack increases the number of transactions gradually, the increase might not exceed the TPS increased by threshold and would not be detected. If the TPS reaches the TPS reached value, the system considers traffic to be an attack even if it did not meet the TPS increased by value. The default value is 200 requests per second.
    Minimum TPS Threshold for detection Specifies that the system considers an IP address to be an attacker if the detected TPS for a specific IP address equals, or is greater than, this number, and the TPS increased by number was reached. The default setting is 40 transactions per second.
    Tip: Click the Set default criteria link to reset these settings to their default values.
    If these thresholds are reached, the system treats the IP address as an attacker and prevents further attacks by limiting the number of requests per second to the history interval.
  9. For Geolocation Detection Criteria, modify the threshold values as needed.
    Note: This setting appears only if one of the Geolocation-based options is selected in the Prevention Policy.
    Option Description
    Geolocation traffic share increased by Specifies that a country should be considered suspicious if the number of requests from that country has increased by this percentage. The default value is 500%.
    Geolocation traffic share is at least Specifies that a country should be considered suspicious if, of all the requests to the web application, the number of requests from that country is at least this percentage. The default value is 10%.
    If both of these criteria are met, the system treats traffic from the country as an attack, and limits the number of requests per second to the history interval.
  10. For URL Detection Criteria, modify the threshold values for when the system treats a URL to be under attack.
    Note: This setting appears only if Prevention Policy is set to URL-Based for Client Side Integrity Defense or CAPTCHA Challenge, or URL-Based Rate Limit for Request Blocking.
    Option Description
    TPS increased by Specifies that the system considers a URL to be that of an attacker if the transactions sent per second to the URL have increased by this percentage, and the detected TPS is greater than the Minimum TPS Threshold for detection. The default value is 500%.
    TPS reached Specifies that the system considers a URL to be suspicious if the number of transactions sent per second to the URL is equal to or greater than this value. This setting provides an absolute value, so, for example, if an attack increases the number of transactions gradually, the increase might not exceed the TPS increased by threshold and would not be detected. If the TPS reaches the TPS reached value, the system considers traffic to be an attack even if it did not meet the TPS increased by value. The default value is 1000 TPS.
    Minimum TPS Threshold for detection Specifies that the system considers a URL to be an attacker if the detected TPS for a specific URL equals, or is greater than, this number, and the TPS increased by number was reached. The default setting is 200 transactions per second.
    If any of these criteria is met, the system handles the attack according to the Prevention Policy settings.
  11. For Site-Wide Detection Criteria, modify the threshold values for when the system treats a website as being under attack.
    Note: This setting appears only if using site-wide prevention policies.
    Option Description
    TPS increased by Specifies that the system considers a whole site to be under attack if the transactions sent per second have increased by this percentage, and the detected TPS is greater than the Minimum TPS Threshold for detection. The default value is 500%.
    TPS reached Specifies that the system considers a whole site to be under attack if the number of requests sent per second is equal to or greater than this number. The default value is 10000 TPS.
    Minimum TPS Threshold for detection Specifies that the system considers a whole site to be under attack if the detected TPS is equal to or greater than this number, and the TPS increased by number was reached. The default setting is 2000 TPS.
    If any of these criteria is met, the system handles the attack according to the Prevention Policy settings. This mitigation method is used last because it may drop some legitimate requests.
  12. For the Prevention Duration setting, specify the time spent in each mitigation step until deciding to move to the next mitigation step.
    Option Description
    Escalation Period Specifies the minimum time spent in each mitigation step before the system moves to the next step when preventing attacks against an attacker IP address or attacked URL. During a DoS attack, the system performs attack prevention for the amount of time configured here for methods enabled in the Prevention Policy. If after this period the attack is not stopped, the system enforces the next enabled prevention step. Type a number between 1 and 3600. The default is 120 seconds.
    De-escalation Period Specifies the time spent in the final escalation step until the system retries the steps using the methods enabled in the Prevention Policy. Type a number between 0 and 86400 seconds; except for 0 (meaning the steps are never retried), the value must be greater than the Escalation Period. The default value is 7200 seconds (2 hours).
    Regardless of the value set for the De-escalation Period, DoS mitigation is reset after 2 hours even if the detection criteria still hold. If the attack is still taking place, the system registers a new attack and mitigation starts over, retrying the steps in the Prevention Policy. If you set the De-escalation Period to less than 2 hours, the reset occurs more frequently.
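The escalation behavior can be modeled as a simple schedule: the system spends one Escalation Period in each enabled prevention step, then stays in the final step. This is a hypothetical simplification of the behavior described above, not an implementation.

```python
def active_step(elapsed_seconds, enabled_steps, escalation_period=120):
    """Return the mitigation step in force `elapsed_seconds` into an
    attack: the system escalates one step per escalation period and
    stays in the final step until the de-escalation period resets
    the cycle (reset not modeled here)."""
    index = min(int(elapsed_seconds // escalation_period),
                len(enabled_steps) - 1)
    return enabled_steps[index]
```

With the default 120-second Escalation Period and three enabled steps, the system would be in the second step 130 seconds into the attack, and in the last step from 240 seconds onward.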
  13. Click Update to save the DoS profile.
You have now configured a DoS profile to prevent DoS attacks based on the client side (TPS-based Detection Mode).
Next, you need to associate the DoS profile with the application’s virtual server. You also have the option of configuring latency-based protection, heavy URL protection, or proactive defense.

Configuring latency-based DoS protection

You can configure Application Security Manager™ to mitigate Layer 7 DoS attacks based on server latency.
  1. On the Main tab, click Security > DoS Protection > DoS Profiles .
    The DoS Profiles list screen opens.
  2. Click the name of an existing DoS profile (or create a new one).
    The DoS Profile Properties screen opens.
  3. Select the Application Security check box.
    The screen refreshes and displays additional configuration settings.
  4. In the Latency-based Anomaly area, for Operation Mode, select an operation mode to determine how the system reacts when it detects a DoS attack.
    Option Description
    Transparent Displays data about DoS attacks on the DoS: Application reporting screen but does not block requests.
    Blocking Applies the necessary mitigation steps to suspicious IP addresses, URLs, geolocations, or site-wide. Also displays information about DoS attacks on the DoS: Application reporting screen.
    The screen refreshes to display additional configuration settings when you select an operation mode.
  5. For Detection Criteria, modify the threshold values as needed.
    If any of these criteria is met, the system handles the attack according to the Prevention Policy settings.
    Option Description
    Latency increased by Specifies that the system considers traffic to be an attack if the latency has increased by this percentage, and the minimum latency threshold has been reached. The default value is 500%.
    Latency reached Specifies that the system considers traffic to be an attack if the latency is greater than this value. This setting provides an absolute value, so, for example, if an attack increases latency gradually, the increase might not exceed the Latency Increased by threshold and would not be detected. If server latency reaches the Latency reached value, the system considers traffic to be an attack even if it did not meet the Latency increased by value. The default value is 10000 ms.
    Minimum Latency Threshold for detection Specifies that the system considers traffic to be an attack if the detected latency equals, or is greater than, this number, and the Latency increased by number was reached. The default setting is 200 ms.
    Tip: Click the Set default criteria link to reset these settings to their default values.
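By analogy with the TPS-based criteria, the latency detection logic can be sketched as follows. This is a minimal Python model using the defaults from the table above; the exact combination of the criteria inside ASM is inferred from the descriptions.

```python
def latency_attack(current_ms, history_ms,
                   latency_increased_by=500, latency_reached=10000,
                   minimum_latency=200):
    """Return True if either latency detection criterion is met
    (defaults mirror the table above)."""
    # Absolute criterion: "Latency reached"
    if current_ms >= latency_reached:
        return True
    # Relative criterion: "Latency increased by" N%, gated by the
    # minimum latency threshold for detection
    if history_ms > 0:
        increase_pct = (current_ms - history_ms) / history_ms * 100
        if increase_pct >= latency_increased_by and current_ms >= minimum_latency:
            return True
    return False
```

For example, latency climbing from 100 ms to 700 ms (a 600% increase above the 200 ms minimum) is flagged; latency that gradually reaches 12000 ms is flagged by the absolute criterion even if the relative increase was never met.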
  6. For the Prevention Policy setting, select the Client-Side Integrity Defense options to determine which mitigation methods the system uses to stop DoS attacks.
    Note: The Operation Mode must be Blocking for the system to perform these defenses.
    Option Description
    Source IP-Based Sends a JavaScript challenge to each suspicious IP address (traffic that meets the IP Criteria in the anomaly). The default is disabled.
    Geolocation-Based Sends a JavaScript challenge to requests from a suspicious country (excluding blacklisted and whitelisted geolocations). The Geolocation Criteria in the anomaly defines the conditions for when a country is considered suspicious. The default is disabled.
    URL-Based Sends a JavaScript challenge to each suspicious URL (traffic that meets the URL Criteria in the anomaly). This setting enforces strong protection and prevents distributed DoS attacks but affects more clients. The default is disabled.
    Site-wide Sends a JavaScript challenge to suspicious traffic (traffic that meets the Site-Wide Criteria in the anomaly). The default is disabled.
    For the options selected, the client-side integrity challenge slows the attack rate. Legal browsers process the JavaScript and respond properly, whereas illegal scripts do not.
  7. For the Prevention Policy setting, select the CAPTCHA Challenge options to determine when the system requests that clients respond to a character recognition request.
    Note: If you enable more than one option, the system uses the options in the order in which they are listed.
    Option Description
    Source IP-Based Sends a CAPTCHA challenge to each suspicious IP address (that is, traffic that meets the IP Criteria in the anomaly). The default is disabled.
    Geolocation-Based Sends a CAPTCHA challenge to requests from a suspicious country (excluding blacklisted and whitelisted geolocations). The Geolocation Criteria in the anomaly defines the conditions for when a country is considered suspicious. The default is disabled.
    URL-Based Sends a CAPTCHA challenge to each suspicious URL (that is, traffic that meets the URL Criteria in the anomaly). The default is disabled.
    Site-wide Sends a CAPTCHA challenge to suspicious traffic (that is, traffic that meets the Site-Wide Criteria in the anomaly). The default is disabled.
    For the options selected, the CAPTCHA challenge determines whether a client is a human or an illegal script. Legal clients can process the challenge and respond properly, whereas illegal scripts do not.
  8. For the Prevention Policy setting, select the Request Blocking options to determine when the system should block requests during a DoS attack.
    Important: If you want to use Request Blocking options, the anomaly’s Operation Mode must be set to Blocking. If set to Transparent, the system reports, but does not block, suspicious requests.
    Option Description
    Source IP-Based Select Rate Limit to randomly block requests from suspicious IP addresses. The system limits the rate of requests to the average rate prior to the attack, or lower than the absolute threshold specified by the IP criteria TPS reached setting. Select Block All to block all requests from suspicious IP addresses. The default is enabled and set to Rate Limit.
    Geolocation-Based Select Rate Limit to randomly block requests from suspicious countries (those that meet the Geolocation Criteria in the anomaly). The system allows requests from that country when its request rate per second is less than the legitimate history interval (before the attack started). Select Block All to block all requests from suspicious countries, except geolocations in the whitelist. The default is disabled.
    URL-Based Rate Limit Indicates that when the system detects URLs under attack (those that meet the URL Criteria in the anomaly), the system drops connections to limit the rate of requests to the URL to the average rate prior to the attack. The default is enabled.
    Site-wide Rate Limit Indicates that the system drops requests for the website as a whole if suspected of being under attack (if the website meets the Site-Wide Criteria in the anomaly). The system allows requests for that site when the request rate per second is less than the legitimate history interval (before the attack started), or less than the threshold you configure in the TPS reached setting. The default is enabled.
    Note: If you enable more than one option, the system uses the options in the order in which they are listed.
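The Rate Limit options described above work by randomly dropping enough requests to bring the admitted rate back to the pre-attack (history) rate. A minimal sketch of that idea, assuming a simple proportional drop:

```python
def drop_fraction(history_rps, current_rps):
    """Fraction of requests to randomly drop so that the admitted
    rate falls back to the pre-attack (history) rate. Simplified
    model of the Rate Limit behavior described above."""
    if current_rps <= history_rps:
        return 0.0  # at or below the legitimate rate; drop nothing
    return 1.0 - history_rps / current_rps
```

For example, if the pre-attack rate was 100 requests per second and the attack drives it to 400, three quarters of requests would be dropped to restore the original rate.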
  9. For Suspicious IP Criteria, modify the threshold values as needed.
    Note: This setting appears if at least one of these Prevention Policy settings is selected: Source IP-Based for Client Side Integrity Defense or CAPTCHA Challenge, or Source IP-Based Rate Limit for Request Blocking.
    Option Description
    TPS increased by Specifies that the system considers an IP address to be that of an attacker if the transactions sent per second have increased by this percentage, and the detected TPS for a specific IP address is equal to or greater than the Minimum TPS Threshold. The default value is 500%.
    TPS reached Specifies that the system considers an IP address to be suspicious if the number of transactions sent per second from an IP address equals, or is greater than, this value. This setting provides an absolute value, so, for example, if an attack increases the number of transactions gradually, the increase might not exceed the TPS increased by threshold and would not be detected. If the TPS reaches the TPS reached value, the system considers traffic to be an attack even if it did not meet the TPS increased by value. The default value is 200 TPS.
    Minimum TPS Threshold for detection Specifies that the system considers an IP address to be an attacker if the detected TPS for a specific IP address equals, or is greater than, this number, and the TPS increased by number was reached. The default setting is 40 transactions per second.
    If any of these criteria is met, the system handles the attack according to the Prevention Policy settings.
  10. For Suspicious Geolocation Criteria, modify the threshold values as needed.
    Note: This setting appears only if one of the Geolocation-Based options is selected in the Prevention Policy.
    Option Description
    Geolocation traffic share increased by Specifies that the system considers a country to be suspicious if the number of requests from a country has increased by this percentage. The default value is 500%.
    Geolocation traffic share is at least Specifies that a country should be considered suspicious if, of all the requests to the web application, the number of requests from that country is at least this percentage. The default value is 10%.
    If both of these criteria are met, the system treats traffic from the country as an attack, and limits the number of requests per second to the history interval.
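Because both geolocation criteria must be met, the check can be sketched as a conjunction. This is an illustrative Python model with the defaults from the table above; the measurement windows ASM actually uses are not specified here.

```python
def geolocation_suspicious(country_rps, total_rps, history_country_rps,
                           traffic_increased_by=500, min_share_pct=10):
    """Return True only if BOTH criteria above are met: the
    country's request count increased by the percentage threshold,
    AND its share of all traffic is at least the minimum share."""
    if total_rps <= 0 or history_country_rps <= 0:
        return False
    share_pct = country_rps / total_rps * 100
    increase_pct = (country_rps - history_country_rps) / history_country_rps * 100
    return increase_pct >= traffic_increased_by and share_pct >= min_share_pct
```

For example, a country jumping from 40 to 300 requests per second (650% increase) is suspicious only if those 300 requests also make up at least 10% of all traffic to the application.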
  11. For Suspicious URL Criteria, modify the threshold values as needed.
    Note: This setting appears if at least one of these Prevention Policy settings is selected: URL-Based for Client Side Integrity Defense or CAPTCHA Challenge, or URL-Based Rate Limit for Request Blocking.
    Option Description
    TPS increased by Specifies that the system considers a URL to be under attack if the transactions sent per second to the URL have increased by this percentage, and the detected TPS for a specific URL is equal to or greater than the Minimum TPS Threshold. The default value is 500%.
    TPS reached Specifies that the system considers a URL to be suspicious if the number of transactions sent per second to the URL is equal to or greater than this value. This setting provides an absolute value, so, for example, if an attack increases the number of transactions gradually, the increase might not exceed the TPS increased by threshold and would not be detected. If the TPS reaches the TPS reached value, the system considers traffic to be an attack even if it did not meet the TPS increased by value. The default value is 1000 TPS.
    Minimum TPS Threshold for detection Specifies that the system considers a URL to be under attack if the detected TPS for a specific URL equals, or is greater than, this number, and the TPS increased by number was reached. The default setting is 40 transactions per second.
    If any of these criteria is met, the system handles the attack according to the Prevention Policy settings.
  12. For Suspicious Site-Wide Criteria, modify the threshold values as needed.
    Note: This setting appears only if using site-wide prevention policies.
    Option Description
    TPS increased by Specifies that the system considers a whole site to be under attack if the transactions sent per second have increased by this percentage, and the detected TPS is equal to or greater than the Minimum TPS Threshold. The default value is 500%.
    TPS reached Specifies that the system considers a whole site to be under attack if the number of requests sent per second is equal to or greater than this number. The default value is 10000 TPS.
    Minimum TPS Threshold for detection Specifies that the system considers a whole site to be under attack if the detected TPS is equal to or greater than this number, and the TPS increased by number was reached. The default setting is 2000 TPS.
    If any of these criteria is met, the system handles the attack according to the Prevention Policy settings.
  13. For the Prevention Duration setting, specify the time spent in each mitigation step until deciding to move to the next mitigation step.
    Option Description
    Escalation Period Specifies the minimum time spent in each mitigation step before the system moves to the next step when preventing attacks against an attacker IP address or attacked URL. During a DoS attack, the system performs attack prevention for the amount of time configured here for methods enabled in the Prevention Policy. If after this period the attack is not stopped, the system enforces the next enabled prevention step. Type a number between 1 and 3600. The default is 120 seconds.
    De-escalation Period Specifies the time spent in the final escalation step until the system retries the steps using the methods enabled in the Prevention Policy. Type a number between 0 and 86400 seconds; except for 0 (meaning the steps are never retried), the value must be greater than the Escalation Period. The default value is 7200 seconds (2 hours).
    Regardless of the value set for the De-escalation Period, DoS mitigation is reset after 2 hours even if the detection criteria still hold. If the attack is still taking place, the system registers a new attack and mitigation starts over, retrying the steps in the Prevention Policy. If you set the De-escalation Period to less than 2 hours, the reset occurs more frequently.
  14. Click Update to save the DoS profile.
You have now configured a DoS profile to prevent DoS attacks based on server latency.
Next, associate the DoS profile with the application’s virtual server. You also have the option of configuring heavy URL protection.

Configuring heavy URL protection

To use heavy URL protection, F5 recommends that you configure latency-based anomaly settings in the DoS profile. That way the system can detect low-volume attacks on heavy URLs when no other high-volume attacks are underway. Also, you must enable at least one of the URL-based prevention policy methods in the TPS-based Anomaly or Latency-based Anomaly settings in the DoS profile.
You can configure Application Security Manager™ (ASM) to prevent DoS attacks on heavy URLs. Heavy URLs are URLs on your application web site that may consume considerable resources under certain conditions. By tracking URLs that are potentially heavy, you can mitigate DoS attacks on these URLs before response latency exceeds a specific threshold.
  1. On the Main tab, click Security > DoS Protection > DoS Profiles .
    The DoS Profiles list screen opens.
  2. Click the name of an existing DoS profile (or create a new one).
    The DoS Profile Properties screen opens.
  3. Select the Application Security check box.
    The screen refreshes and displays additional configuration settings.
  4. Select the Heavy URL Protection check box.
    The screen displays additional configuration settings.
  5. To automatically detect heavy URLs, select the Automatic Detection check box.
    Tip: You may want to hold off selecting this option until after observing normal traffic for a day or two so you can assign a reasonable latency threshold value.
    The system detects heavy URLs by measuring the latency tail ratio, which is the number of transactions whose latency is consistently greater than the latency threshold. A URL is considered heavy if its latency tail ratio is considerably above the global average, in the long run (default of 24 hours).
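The latency tail ratio described above can be illustrated with a short Python sketch. The tail-ratio calculation follows the description (fraction of transactions above the latency threshold); the comparison factor against the global average is a hypothetical parameter, since the manual only says "considerably above".

```python
def latency_tail_ratio(latencies_ms, threshold_ms=1000):
    """Fraction of transactions whose latency exceeds the
    threshold -- a simplified version of the latency tail ratio."""
    if not latencies_ms:
        return 0.0
    over = sum(1 for latency in latencies_ms if latency > threshold_ms)
    return over / len(latencies_ms)

def is_heavy(url_tail_ratio, global_tail_ratio, factor=3.0):
    """Flag a URL as heavy when its tail ratio is well above the
    global average; `factor` is an assumed multiplier, not a
    documented value."""
    return url_tail_ratio > factor * global_tail_ratio
```

For example, a URL where half the transactions exceed the threshold, against a global average of 5%, would be flagged as heavy.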
  6. In the Heavy URLs setting, add the URLs that you expect to be heavy at times (to have high latency), in the form /query.html.
    If you are not sure which URLs to add, leave this list blank and let the system automatically detect heavy URLs by using automatic detection.
  7. In the Ignored URLs (Wildcards Supported) setting, add the URLs that you never want the system to consider heavy.
    The URLs in this list may include wildcards.
  8. If using automatic detection, in the Latency Threshold field, type the number of milliseconds for the system to use as the threshold for automatically detecting heavy URLs.
    The default value is 1000 milliseconds.
  9. Click Update to save the DoS profile.
You have now configured a DoS profile that includes heavy URL protection. Heavy URLs are detected based on latency. ASM™ tracks the probability distribution of server latency, which is typically heavy-tailed.
To validate automatic detection, you can view the URL Latencies report periodically to check that the latency threshold that you used is close to the value in the latency histogram column for all traffic. You should set the latency threshold so that approximately 95% of the requests for the virtual server have lower latency.
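Picking a threshold below which ~95% of requests fall is a percentile calculation. As an offline sanity check against your own latency measurements (this is not an ASM feature, just a helper), a nearest-rank percentile can be computed like this:

```python
def latency_threshold_for(latencies_ms, percentile=95):
    """Nearest-rank percentile: return a threshold such that
    roughly `percentile`% of the observed request latencies fall
    at or below it."""
    ordered = sorted(latencies_ms)
    rank = int(round(percentile / 100 * len(ordered)))
    rank = max(1, min(rank, len(ordered)))  # clamp to valid range
    return ordered[rank - 1]
```

Feeding it latencies gathered during a day or two of normal traffic gives a reasonable starting value for the Latency Threshold field.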

By reviewing the URL Latencies report and sorting the URLs listed by latency, you can make sure that the URLs that you expect to be heavy are listed in the DoS profile. Also, if the system detects too many (or too few) heavy URLs, you can increase (or decrease) the latency threshold.

Configuring CAPTCHA for DoS protection

You can configure a CAPTCHA challenge as part of the prevention policy for both TPS-based and latency-based DoS protection. A CAPTCHA (or visual character recognition) challenge determines whether the client is human or an illegal script.
  1. On the Main tab, click Security > DoS Protection > DoS Profiles .
    The DoS Profiles list screen opens.
  2. Click the name of an existing DoS profile (or create a new one).
    The DoS Profile Properties screen opens.
  3. Select the Application Security check box.
    The screen refreshes and displays additional configuration settings.
  4. Configure TPS-based or latency-based DoS protection, or both.
    Other tasks describe how to do this in detail.
  5. For the Prevention Policy setting, select the CAPTCHA Challenge options to determine when the system requests that clients respond to a character recognition request.
    Note: If you enable more than one option, the system uses the options in the order in which they are listed.
    Option Description
    Source IP-Based Sends a CAPTCHA challenge to each suspicious IP address (that is, traffic that meets the IP Criteria in the anomaly). The default is disabled.
    Geolocation-Based Sends a CAPTCHA challenge to requests from a suspicious country (excluding blacklisted and whitelisted geolocations). The Geolocation Criteria in the anomaly defines the conditions for when a country is considered suspicious. The default is disabled.
    URL-Based Sends a CAPTCHA challenge to each suspicious URL (that is, traffic that meets the URL Criteria in the anomaly). The default is disabled.
    Site-wide Sends a CAPTCHA challenge to suspicious traffic (that is, traffic that meets the Site-Wide Criteria in the anomaly). The default is disabled.
    For the options selected, the CAPTCHA challenge determines whether a client is a human or an illegal script. Legal clients can process the challenge and respond properly, whereas illegal scripts do not.
  6. In the CAPTCHA Response Settings area, specify the text the system sends as a challenge to users.
    Note: This setting appears only if one or more of the CAPTCHA Challenge options is selected in the Prevention Policy.
    1. From the First Response Type list, select Default to use the default challenge, or Custom if you want to change the text.
    2. If customizing the text, edit the text (HTML) in the First Response Body field.

      You can use the following variables within the challenge or response.

      Variable Use
      %DOSL7.captcha.image% Displays the CAPTCHA image in data URI format.
      %DOSL7.captcha.change% Displays the change CAPTCHA challenge icon.
      %DOSL7.captcha.solution% Displays the solution text box.
      %DOSL7.captcha.submit% Displays the Submit button.
    3. Click Show to see what it looks like.
  7. In the CAPTCHA Response Settings area, specify the text the system sends to users if they fail to respond correctly to the CAPTCHA challenge.
    1. From the Failure Response Type list, select Default to use the default response, or Custom if you want to change the text.
    2. If customizing the text, edit the text in the Failure Response Body field.
      You can use the same variables in the text to send a second challenge.
    3. Click Show to see what it looks like.
  8. Click Update to save the DoS profile.
You have now configured a CAPTCHA challenge for potential DoS attackers that helps with filtering out bots. The system sends a character recognition challenge only on the first request of a client session. If it is solved correctly, the request is sent to the server. Subsequent requests in the session do not include the challenge. If the client fails the first challenge, the CAPTCHA response is sent. If that also fails, the client is handled according to the prevention policy options selected in the DoS profile.
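A custom First Response Body (step 6) is an HTML fragment into which the system substitutes the documented %DOSL7.captcha.*% variables. The HTML below is purely illustrative; only the four variable names come from the table above.

```python
# Hypothetical custom First Response Body. The surrounding HTML is
# an example only; the %DOSL7.captcha.*% placeholders are the
# documented variables the system replaces at challenge time.
CUSTOM_FIRST_RESPONSE_BODY = """\
<html><body>
  <p>This site is verifying that you are not a robot.
     Please type the characters you see below.</p>
  %DOSL7.captcha.image% %DOSL7.captcha.change%
  %DOSL7.captcha.solution% %DOSL7.captcha.submit%
</body></html>
"""
```

A body like this would be pasted into the First Response Body field with Custom selected as the response type.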

Recording traffic during DoS attacks

If you have DoS protection enabled, you can configure the system to record traffic during DoS attacks. By reviewing the recorded traffic in the form of a TCP dump, you can diagnose the attack vectors and attackers, observe whether and how the attack was mitigated, and determine whether you need to change the DoS protection configuration.
  1. On the Main tab, click Security > DoS Protection > DoS Profiles .
    The DoS Profiles list screen opens.
  2. Click the name of an existing DoS profile (or create a new one).
    The DoS Profile Properties screen opens.
  3. Select the Application Security check box.
    The screen refreshes and displays additional configuration settings.
  4. Toward the bottom of the screen, select the Record Traffic During Attacks check box.
    The screen refreshes and displays additional configuration settings.
  5. For Maximum TCP Dump Duration, type the maximum number of seconds (from 1 - 300) for the system to record traffic during a DoS attack.
    The default value is 30 seconds.
  5. For Maximum TCP Dump Size, type the maximum size in MB (from 1 - 50) allowed for the TCP dump.
    When the maximum size is reached, the dump is complete. The default value is 10 MB.
  7. For TCP Dump Repetition, specify how often to perform TCP dumps during a DoS attack:
    • To record traffic once during an attack, select Dump once per attack.
    • To record traffic periodically during an attack, select Repeat dump after and type the number of seconds (between 1 - 3600) for how long to wait after completing a TCP dump before starting the next one.
  8. Click Update to save the DoS profile.
When the system detects a DoS attack, it performs a TCP dump to record the traffic on the virtual server where the attack occurred. The files are located on the system in /shared/dosl7/tcpdumps. The name of the file has the format: <yyyy_mm_dd_hh:mm:ss>-<attack_ID>-<seq_num>.pcap, including the time the dump started, the ID of the attack in logs and reports, and the number of the TCP dump since the attack started. If traffic being recorded is SSL traffic, it is recorded encrypted.
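The file-name format above can be split into its parts with a small parser, which is handy when scripting collection of dumps from /shared/dosl7/tcpdumps. This helper is not part of the product; it simply follows the documented naming pattern.

```python
import re

# Matches <yyyy_mm_dd_hh:mm:ss>-<attack_ID>-<seq_num>.pcap
DUMP_NAME = re.compile(
    r"(?P<start>\d{4}_\d{2}_\d{2}_\d{2}:\d{2}:\d{2})-"
    r"(?P<attack_id>\d+)-(?P<seq>\d+)\.pcap$")

def parse_dump_name(name):
    """Return the start time, attack ID, and dump sequence number
    from a TCP dump file name, or None if it does not match."""
    match = DUMP_NAME.match(name)
    return match.groupdict() if match else None
```

For example, parsing "2014_05_01_12:30:00-123456-1.pcap" yields the dump start time, attack ID 123456 (the ID shown in logs and reports), and sequence number 1.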
If working with F5 support, you can collect the TCP dump files into a QuickView file so that support personnel can help determine the cause of the DoS attack, and recommend ways of preventing future attacks.

Configuring proactive bot defense

To use proactive bot defense, client browsers accessing your web site must be able to accept JavaScript.
You can configure Application Security Manager™ (ASM) to proactively protect your web site against attacks by web robots (called bots, for short). Proactive bot defense checks all traffic (except whitelisted URLs) coming to the web site, not simply suspicious traffic.
  1. On the Main tab, click Security > DoS Protection > DoS Profiles .
    The DoS Profiles list screen opens.
  2. Click the name of an existing DoS profile (or create a new one).
    The DoS Profile Properties screen opens.
  3. Select the Application Security check box.
    The screen refreshes and displays additional configuration settings.
  4. In the Proactive Bot Defense area, select the Operation Mode to use.
    Option Description
    During Attacks Implements proactive bot defense by checking all traffic during a DoS attack.
    Always Implements proactive bot defense at all times by checking all traffic, and prevents DoS attacks from starting.
    Off Disables proactive defense.
  5. In the Grace Period field, type the number of seconds to wait before the system begins bot detection.

    The grace period allows web pages (including complex pages such as those which include images, JS, and CSS) the time to be recognized, receive a signed cookie, and completely load without unnecessary request drops. The default value is 300 seconds.

    The grace period begins after the signed cookie is renewed, after a change is made to the configuration, or after proactive bot defense starts as a result of a detected DoS attack or high latency.

  6. From the Cross-Domain Requests setting, specify how the system validates cross-domain requests (such as requests for non-HTML resources like images, CSS, XML, JavaScript, or Flash). Cross-domain requests are requests with different domains in the Host and Referer headers.
    Option Description
    Allow all requests Allows requests arriving to a non-HTML URL referred by a different domain and without a valid cookie if they pass a simple challenge. The system sends a challenge that tests basic browser capabilities, such as HTTP redirects and cookies.
    Allow configured domains; validate in bulk Allows requests to other related internal or external domains that are configured in this section, and validates the related domains in advance. The requests to related site domains must include a valid cookie from one of the site domains; the external domains are allowed if they pass a simple challenge. Choose this option if your web site does not use many domains; in that case, be sure to include them all in the lists below.
    Allow configured domains; validate upon request Allows requests to other related internal or external domains that are configured in this section. The requests to related site domains must include a valid cookie from the main domain (in the list below); the external domains are allowed if they pass a simple challenge. Choose this option if your web site uses many domains, and list one main domain in the list below.
  7. If you selected one of the Allow configured domains options in the last step, you need to add Related Site Domains that are part of your web site, and Related External Domains that are allowed to link to resources in your web site.
  8. In the URL Whitelist setting, add the URLs to which the web site expects to receive requests and that you want the system to consider safe.
    Type URLs in the form /index.html.
    The system does not perform proactive bot defense on requests to the URLs in this list.
  9. Click Update to save the DoS profile.
You have now configured proactive bot defense which protects against DDoS, web scraping, and brute force attacks (on the virtual servers that use this DoS profile).

The system sends a JavaScript challenge to traffic accessing the site for the first time. Legitimate traffic answers the challenge correctly, and resends the request with a valid cookie; then it is allowed to access the server. The system drops requests sent by browsers that do not answer the system’s initial JavaScript challenge (considering those requests to be bots).
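The request flow just described can be reduced to a two-branch decision. This is a deliberately simplified model (the real mechanism involves a JavaScript challenge and a signed, renewable cookie), intended only to show the shape of the logic:

```python
def proactive_bot_defense(request, signed_cookie):
    """Simplified model of the flow above: a request carrying the
    valid signed cookie is forwarded to the server; any other
    request receives the JavaScript challenge first."""
    if request.get("cookie") == signed_cookie:
        return "forward-to-server"
    return "send-js-challenge"
```

A browser that solves the challenge resends the request with the cookie and takes the forward path; a bot that cannot execute JavaScript never obtains the cookie and is dropped at the challenge.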

If proactive bot defense is always running, ASM™ filters out bots before they manage to build up an attack on the system and cause damage. If using proactive bot defense only during attacks, once ASM detects a DoS attack, the system uses proactive bot defense for the duration of the attack. Proactive bot defense is used together with the active mitigation method. Any request that is not blocked by the active mitigation method still has to pass the proactive bot defense mechanism to be able to reach the server.

Associating a DoS profile with a virtual server

You must first create a DoS profile separately, to configure denial-of-service protection for applications, the DNS protocol, or the SIP protocol.
You add denial-of-service protection to a virtual server to provide enhanced protection from DoS attacks, and track anomalous activity on the BIG-IP® system.
  1. On the Main tab, click Local Traffic > Virtual Servers .
    The Virtual Server List screen opens.
  2. Click the name of the virtual server you want to modify.
  3. In the Destination Address field, type the IP address in CIDR format.
    The supported format is address/prefix, where the prefix length is in bits. For example, an IPv4 address/prefix is 10.0.0.1 or 10.0.0.0/24, and an IPv6 address/prefix is ffe1::0020/64 or 2001:ed8:77b5:2:10:10:100:42/64. When you use an IPv4 address without specifying a prefix, the BIG-IP® system automatically uses a /32 prefix.
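The implicit-prefix rule for the Destination Address can be demonstrated with Python's standard ipaddress module. The /32 default for a bare IPv4 address is stated above; the /128 default used here for a bare IPv6 address is an assumption, not stated in the text.

```python
import ipaddress

def destination_network(addr):
    """Normalize a Destination Address: a bare IPv4 address gets an
    implicit /32 prefix (per the text above); a bare IPv6 address
    is assumed to get /128."""
    if "/" not in addr:
        addr += "/128" if ":" in addr else "/32"
    # strict=False masks any host bits in the given address
    return ipaddress.ip_network(addr, strict=False)
```

So 10.0.0.1 becomes the single-host network 10.0.0.1/32, while 10.0.0.0/24 and ffe1::0020/64 are accepted as given.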
  4. From the Security menu, choose Policies.
  5. To enable denial-of-service protection, from the DoS Protection Profile list, select Enabled, and then, from the Profile list, select the DoS profile to associate with the virtual server.
  6. Click Update to save the changes.
DoS protection is now enabled, and the DoS Protection profile is associated with the virtual server.

Implementation Result

When you have completed the steps in this implementation, you have configured the Application Security Manager™ (ASM) to protect against L7 DoS attacks. If using proactive bot defense, ASM™ protects against DDoS, web scraping, and brute force attacks (on the virtual servers that use this DoS profile) before the attacks can harm the system. Depending on the configuration, the system may also detect DoS attacks based on transactions per second (TPS) on the client side, server latency, or both.

In TPS-based detection mode, if the ratio of the current transaction rate to the transaction rate during the history interval is greater than the TPS increased by percentage, the system considers the URL to be under attack, the IP address or country to be suspicious, or possibly the whole site to be suspicious.

In latency-based detection mode, if there is a latency increase and at least one suspicious IP address, country, URL, or heavy URL, the system considers the URL to be under attack, the IP address or country to be suspicious, or possibly the whole site to be suspicious.

If you enabled heavy URL protection, the system tracks URLs that consume higher than average resources and mitigates traffic that is going to those URLs.

If you chose the blocking operation mode, the system applies the necessary mitigation steps to suspicious IP addresses, URLs, or geolocations, or applies them site-wide. If using the transparent operation mode, the system reports DoS attacks but does not block them.

If using iRules®, when the system detects a DoS attack based on the configured conditions, it triggers an iRule and responds to the attack as specified in the iRule code.

After traffic is flowing to the system, you can check whether DoS attacks are being prevented, and investigate them by viewing DoS event logs and reports.