Manually Configuring Security Policies

The core of the Application Security Manager security functionality is the security policy, a collection of settings that secures traffic for a web application. The security policy defines which traffic, including which file types, URLs, and parameters, can access the web application.
When the Application Security Manager (ASM) receives a request for the web application, the system compares the request to the active security policy. If the request complies with the security policy, the system forwards the request to the web application.
If the request does not comply with the security policy, the system generates a violation (or violations), and then either forwards the request or blocks the request, depending on the enforcement mode of the security policy and the blocking settings on the violations.
Run the Deployment wizard.
Using the Deployment wizard, you can create a security policy based on one of several typical deployment scenarios.
Review outstanding configuration tasks.
By using the Overview Summary screen, you can see a list of outstanding tasks (such as whether a signature update is available), policy building status, and links to tasks recommended for each security policy.
Periodically review the security policy details.
To ensure that the security policy is providing adequate application security, review the requests, charts, and statistics.
1.
On the Main tab, expand Security, point to Application Security and click Security Policies.
The Active Policies screen opens.
2.
Click Create.
The Deployment wizard opens.
3.
Follow through the screens of the wizard.
The Description area of each wizard screen provides additional information about the screen. The online help describes each of the options on the screen.
For information about creating security policies, refer to the BIG-IP® Application Security Manager: Getting Started Guide. For details about maintaining security policies, refer to BIG-IP® Application Security Manager: Implementations.
Important: The remainder of this chapter describes the individual configuration tasks that you can perform if you are manually developing a security policy. If you are using automatic policy building, the Real Traffic Policy Builder® performs most of these tasks for you. In that case, refer to Chapter 2, Building a Security Policy Automatically.
You can access a security policy for editing either from the Active Policies screen or from the editing context area. The editing context area, shown in Figure 3.1, appears at the top of almost every security policy component screen throughout Application Security Manager.
1.
On the Main tab, expand Security, point to Application Security, and click a security policy.
4.
To put the security policy changes into effect immediately, click the Apply Policy button in the editing context area.
Tip: To quickly access the Properties screen for a security policy, click the Current edited policy link in the editing context area.
The policy properties are the options and settings that generally define a security policy. You can view and modify the properties of a security policy that you created with the Deployment wizard.
Note: Whenever you change a security policy, you must apply the security policy to put the changes you made into effect. To remind you that you need to apply the policy, the system displays the message Changes have not been applied yet next to the Apply Policy button.
1.
On the Main tab, expand Security, select Application Security, Security Policies, and click Active Policies.
The Active Policies screen opens.
2.
Click the name of the security policy you want to modify.
The Properties screen opens.
3.
In the Security Policy Description field, type a description.
4.
Click the Save button to save any changes you may have made.
5.
To put the security policy changes into effect immediately, click the Apply Policy button in the editing context area.
Transparent
In transparent mode, blocking is disabled for the security policy, and you cannot set the violations to block on the Blocking screen. Traffic is not blocked even if a violation is triggered. You can use this mode and staging when you first put a security policy into effect to make sure that no false positives occur that would stop legitimate traffic.
Blocking
In blocking mode, blocking is enabled for the security policy, and you can enable or disable the Block flag for individual violations.
Traffic is blocked when a violation occurs only if all of the following conditions are met: you configured the system to block that type of violation, the enforcement readiness period is over, you removed from staging all entities (explicit and wildcard) whose enforcement readiness period is over, and you deleted from the security policy any wildcard entities that have learn explicit entities enabled. You can use this mode when you are ready to enforce the security policy.
You can change the enforcement mode for a security policy on the Policy Properties screen or the Application Security: Blocking: Settings screen.
When the system receives an incoming request that complies with the security policy, the traffic is always forwarded to the destination, regardless of the mode the security policy is in.
When the system receives an incoming request that does not comply with the security policy, the system generates violations. What happens to the traffic depends on whether the Learn, Alarm, or Block flag is set for the violation that occurred, and whether or not an entity in the request is in staging. When an entity is first created, you can put it in staging, where the system can learn its properties (if the Learn flag is set); traffic that includes a staged entity is not blocked. The system can also log the violations (if the Alarm flag is set). After the enforcement readiness period is over, requests causing violations with the Block flag set are blocked.
Table 3.1 describes what happens in each mode when an incoming request does not comply with the security policy, and generates a violation.
Block flag enabled for the violation that occurred: In transparent mode, traffic is not blocked. In blocking mode, traffic is blocked (unless the violation involves an entity that is in staging); the system sends the blocking response page to the client, advises the client that the request was blocked, and provides a support ID number for the violating request.
Block flag not enabled (and no other violation with Block enabled occurred): Traffic is not blocked in either mode; the system logs the violation if the Alarm flag is enabled.
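As a rough summary of the two modes, the blocking decision reduces to the following sketch (illustrative Python, not ASM source code; enforcement readiness details are omitted):

    def should_block(enforcement_mode: str, block_flag_enabled: bool,
                     entity_in_staging: bool) -> bool:
        # Transparent mode never blocks traffic, even when violations occur.
        if enforcement_mode == "transparent":
            return False
        # Blocking mode: block only if the violation's Block flag is enabled and
        # the request does not involve an entity that is still in staging.
        return block_flag_enabled and not entity_in_staging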
1.
On the Main tab, expand Security, select Application Security, Security Policies, and click Active Policies.
The Active Policies screen opens.
2.
Click the name of the security policy you want to modify.
The Properties screen opens.
3.
In the Configuration area, for the Enforcement Mode setting, select either Transparent or Blocking.
4.
Click Save to save any changes you may have made to the security policy properties.
For each security policy, you can configure the number of days used as the enforcement readiness period. Security policy entities and attack signatures remain in staging for this period of time before the system suggests that you enforce them. The security policy provides suggestions when requests match the attack signatures, or do not adhere to the security policy entity's settings. During the enforcement readiness period, the system does not block that traffic, even if those requests trigger violations against the security policy.
Note: If the Policy Builder meets the required traffic threshold and runs after the enforcement readiness period is over, the Policy Builder automatically enables the security policy entities and the attack signatures that did not cause violations during the period.
If you enable learn explicit entities on the wildcard entities, the system learns the explicit file types, parameters, or URLs that the web application uses. You can review the new entities and decide which are legitimate entities for the web application, and accept them into the security policy. For more information about the enforcement readiness period for wildcard entities, see Understanding staging and explicit learning for wildcard entities.
1.
On the Main tab, expand Security, point to Application Security, Security Policies, and click Active Policies.
The Active Policies screen opens.
2.
Click the name of the security policy for which you want to adjust the enforcement readiness period.
The Properties screen opens.
3.
In the Configuration area, for the Enforcement Readiness Period setting, type the number of days you want the entities or signatures to be in staging; this is also how long you want the security policy to learn explicit entities for wildcards (in Add All Entities mode). The default value is 7 days.
4.
Click Save to save any changes you may have made to the security policy properties.
5.
To put the security policy changes into effect immediately, click the Apply Policy button in the editing context area near the top of the screen.
For each security policy, you can enable or disable staging for attack signatures. By default, attack signature staging is enabled.
When the staging feature is enabled, the system places all newly assigned and newly updated signatures in staging for the number of days specified in the staging period. The system does not enforce signatures that are in staging, even if it detects a violation. Instead, the system records the request information. If staging is disabled, the system enforces the signature Learn, Alarm, and Block settings immediately.
1.
On the Main tab, expand Security, point to Application Security, Attack Signatures, then click Attack Signatures Configuration.
The Attack Signatures Configuration screen opens.
2.
In the editing context area, ensure that the Current edited policy is the one that you want to update.
3.
In the Configuration area, for Signature Staging, specify your staging preference:
Select the Enabled check box to enforce staging on new or changed signatures. (This is the default setting.)
Clear the Enabled check box to disable signature staging.
None of the security policy signatures are in staging, regardless of the staging configuration of each individual signature, and the system enforces the signatures' Learn, Alarm, and Block settings immediately.
4.
Click Save to save any changes you may have made to the security policy properties.
5.
To put the security policy changes into effect immediately, click the Apply Policy button in the editing context area near the top of the screen.
When you first create a security policy using the Deployment wizard, you have the option of making a security policy case-sensitive when configuring its properties. By default, the option Security Policy is case sensitive is selected, and the security policy treats file types, URLs, and parameters as case-sensitive.
You can disable the setting, so that the security policy elements are not case-sensitive, only when initially creating the policy. You cannot change the case-sensitivity of a security policy after you finish running the Deployment wizard. When a policy is not case-sensitive, the system stores all security policy elements in lowercase in the security policy configuration.
1.
On the Main tab, expand Security, select Application Security, Security Policies, and click Active Policies.
The Active Policies screen opens.
2.
Click the name of the security policy you want to review.
The Properties screen opens.
3.
Review the Security Policy is case sensitive setting.
If the value is Yes, the security policy is case-sensitive; if the value is No, the policy is not case-sensitive.
Note: You cannot change this setting after a security policy is created.
4.
Click Cancel when you are done.
You can determine whether a security policy differentiates between HTTP and HTTPS URLs when creating a security policy. Later, you can view the setting but you can change it only if the security policy contains no URLs that have the same name and use different protocols.
If the differentiate between HTTP and HTTPS URLs setting is disabled, the security policy configures URLs without specifying a protocol. This is useful for applications that behave the same for HTTP and HTTPS, and keeps the security policy from including the same URL twice.
1.
On the Main tab, expand Security, select Application Security, Security Policies, and click Active Policies.
The Active Policies screen opens.
2.
Click the name of the security policy you want to modify.
The Properties screen opens.
3.
Review the Differentiate between HTTP and HTTPS URLs setting.
If the Enabled check box is selected, the security policy differentiates between HTTP and HTTPS URLs. Otherwise, it does not, and creates protocol-independent URLs.
4.
Click Save if you made changes, or Cancel if you made no changes.
You specify a maximum HTTP header length so that the system knows the acceptable maximum length for the HTTP header in an incoming request. The system applies the length check to header names and values. This setting is useful primarily in preventing buffer overflow attacks.
1.
On the Main tab, expand Security, select Application Security, Security Policies, and click Active Policies.
The Active Policies screen opens.
2.
Click the name of the security policy you want to modify.
The Properties screen opens.
3.
For the Configuration setting, select Advanced.
The screen refreshes, and displays additional configuration options.
4.
For the Maximum HTTP Header Length setting, select one of the options:
Any specifies that the system accepts HTTP headers of any length.
Length with a value (in bytes) specifies that the system accepts HTTP headers up to that length. The default maximum length is 2048 bytes.
5.
Click Save to save any changes you may have made to the security policy properties.
6.
To put the security policy changes into effect immediately, click the Apply Policy button in the editing context area.
You specify a maximum cookie header length so that the system knows the acceptable maximum length for any cookie headers in the incoming HTTP request. As with the maximum HTTP header length setting, you can use this setting primarily to help prevent buffer overflow attacks.
1.
On the Main tab, expand Security, select Application Security, Security Policies, and click Active Policies.
The Active Policies screen opens.
2.
Click the name of the security policy you want to modify.
The Properties screen opens.
3.
For the Configuration setting, select Advanced.
The screen refreshes, and displays additional configuration options.
4.
For the Maximum Cookie Header Length setting, select one of the options:
Any specifies that the system accepts cookie headers of any length.
Length with a value (in bytes) specifies that the system accepts cookie headers up to that length. The default maximum length is 2048 bytes.
5.
Click Save to save any changes you may have made to the security policy properties.
6.
To put the security policy changes into effect immediately, click the Apply Policy button in the editing context area.
By default, the Application Security Manager accepts all response codes from 1xx to 3xx as valid responses. Response codes from 4xx to 5xx are considered invalid unless added to the Allowed Response Status Codes list. By default, 400, 401, 404, 407, 417, and 503 are on the list as valid HTTP response status codes.
If a response contains a response status code from 4xx to 5xx that is not on the list, the system issues the violation, Illegal HTTP status in response. If you configured the security policy to block this violation, the system blocks the response.
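A minimal sketch of this check, using the default allowed list named above (illustrative Python, not ASM code):

    ALLOWED_4XX_5XX = {400, 401, 404, 407, 417, 503}   # default Allowed Response Status Codes

    def response_status_is_legal(status_code: int) -> bool:
        if 100 <= status_code <= 399:
            return True                        # 1xx-3xx responses are always accepted
        return status_code in ALLOWED_4XX_5XX  # 4xx-5xx must be on the allowed list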
1.
On the Main tab, expand Security, select Application Security, Security Policies, and click Active Policies.
The Active Policies screen opens.
2.
Click the name of the security policy you want to modify.
The Properties screen opens.
3.
For the Configuration setting, select Advanced.
The screen refreshes, and displays additional configuration options.
4.
For the Allowed Response Status Codes setting, add the response status codes from 400 to 599 that you want the system to consider legal.
5.
Click Save.
6.
To put the security policy changes into effect immediately, click the Apply Policy button in the editing context area.
If an application uses dynamic information in URLs (for example, user names), the Application Security Manager cannot use its normal functions to extract and enforce URLs or flows because the URI contains a dynamic element. If the web application that you are securing could contain dynamic information in a URL, you can enable the Dynamic Session ID in URL setting. (You only need to configure this setting if you know that your application works this way.) When the system receives a request in which the dynamic session information does not match the settings in the security policy, the system issues the Illegal session ID in URL violation.
When you enable the Dynamic Session ID in URL option on the Policy Properties screen, the Application Security Manager extracts the dynamic session information from requests or responses, based on the pattern that you configure. For requests, the system applies the pattern to the URI up to, but not including, the question mark (?) character in a query string.
Using dynamic session IDs does not change the length of the URL with regard to URL length restrictions. That is, length restrictions are based on the URL including the session ID.
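For illustration, the following Python sketch applies the default pattern to the part of the URI before the query string; the sample URI is hypothetical, and the check is a standalone approximation of what ASM does internally:

    import re

    # Default dynamic session ID pattern from the Policy Properties screen.
    SESSION_ID_PATTERN = re.compile(r"\/sap\([^)]+\)")

    def extract_session_id(uri: str):
        # Apply the pattern only to the URI up to, but not including, the '?'.
        path = uri.split("?", 1)[0]
        match = SESSION_ID_PATTERN.search(path)
        return match.group(0) if match else None

    print(extract_session_id("/sap(cz1TSUQlM2E)/bc/gui/webgui?client=100"))
    # -> /sap(cz1TSUQlM2E)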
1.
On the Main tab, expand Security, select Application Security, Security Policies, and click Active Policies.
The Active Policies screen opens.
2.
Click the name of the security policy you want to modify.
The Properties screen opens.
3.
For the Configuration setting, select Advanced.
The screen refreshes, and displays additional configuration options.
4.
For the Dynamic Session ID in URL option, set the option as needed:
Custom pattern: The security policy uses a user-defined regular expression to recognize a dynamic session ID in URLs. Type a regular expression in the Value field, and a description in the Description field.
Default pattern: The security policy uses the default regular expression (\/sap\([^)]+\)) for recognizing a dynamic session ID in URL.
Disabled: The security policy does not enforce dynamic session IDs in URLs. This is the default value.
5.
Click Save to save any changes you may have made to the security policy properties.
6.
To put the security policy changes into effect immediately, click the Apply Policy button in the editing context area.
An iRule is a script that lets you customize how you manage traffic on the BIG-IP system. You can write iRules® to modify a request or response based on violations that occur. For detailed information on iRules, see the F5 Networks DevCentral web site, http://devcentral.f5.com.
If you want to use iRules to perform actions based on Application Security Manager iRule events, you must enable the Trigger ASM iRule event setting. By default, the iRule event setting is disabled. Table 3.2 lists the iRule events that iRules can subscribe to in Application Security Manager.
ASM_REQUEST_BLOCKING: Occurs when Application Security Manager is generating an error response to the request that caused the violation, and gives the iRule a chance to modify the response before it is sent.
1.
On the Main tab, expand Security, select Application Security, Security Policies, and click Active Policies.
The Active Policies screen opens.
2.
Click the name of the security policy you want to modify.
The Properties screen opens.
3.
For the Configuration setting, select Advanced.
The screen refreshes, and displays additional configuration options.
4.
If you have written iRules to process application security iRule events, and assigned them to a specific virtual server, for the Trigger ASM iRule Events setting, select the Enabled check box.
5.
Click Save to save any changes you may have made to the security policy properties.
6.
To put the security policy changes into effect immediately, click the Apply Policy button in the editing context area.
You can configure Application Security Manager to trust XFF (X-Forwarded-For) headers or customized XFF headers in requests. You may want to configure trusted XFF headers if the Application Security Manager is deployed behind an internal or other trusted proxy. Then, the system uses the IP address that initiated the connection to the proxy instead of the internal proxy's IP address. This option is useful for logging, web scraping, anomaly detection, and the geolocation feature.
You should not configure trusted XFF headers if you think the HTTP header may be spoofed, or crafted, by a malicious client.
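Conceptually, trusting the XFF header changes which address the system treats as the client, along these lines (hypothetical Python sketch; the proxy address and header handling are assumptions, not ASM internals):

    TRUSTED_PROXIES = {"10.0.0.5"}           # hypothetical internal proxy address

    def effective_client_ip(connection_ip: str, headers: dict) -> str:
        xff = headers.get("X-Forwarded-For")
        if connection_ip in TRUSTED_PROXIES and xff:
            # Use the left-most address, which the proxy chain recorded for the
            # client that initiated the connection.
            return xff.split(",")[0].strip()
        return connection_ip

    print(effective_client_ip("10.0.0.5", {"X-Forwarded-For": "203.0.113.7, 10.0.0.5"}))
    # -> 203.0.113.7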
1.
On the Main tab, expand Security, select Application Security, Security Policies, and click Active Policies.
The Active Policies screen opens.
2.
Click the name of the security policy you want to modify.
The Properties screen opens.
3.
For the Configuration setting, select Advanced.
The screen refreshes, and displays additional configuration options.
4.
For the Trust XFF Header setting, select the Enabled check box.
The screen refreshes, and displays the Custom XFF Headers configuration option.
5.
If your web application uses custom XFF headers, in the Custom XFF Headers setting, add them as follows:
a)
For New Custom XFF Header, type the XFF header that the system should trust.
b)
Click Add.
Tip: You can configure up to five custom XFF headers.
6.
Click Save to save your changes.
7.
To put the security policy changes into effect immediately, click the Apply Policy button in the editing context area.
A URI path parameter is the part of a path segment that occurs after its name. You can configure how the security policy handles path parameters that are attached to path segments in URIs. Path parameters can be ignored, treated as URLs, or treated as parameters.
The maximum number of path parameters collected in one URI path is 10. All the rest of the parameters (from the eleventh on, counting from left to right) are ignored as parameters, and are stripped from the URI as part of the normalization process.
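For example, in the path segment item;color=red;size=m, the path parameters are color=red and size=m (assuming the conventional semicolon-delimited form). A rough Python sketch of splitting them out, for illustration only:

    def split_path_parameters(path_segment: str):
        # Illustrative helper, not how ASM parses requests. ASM collects at most
        # 10 path parameters per URI path; further parameters are ignored and
        # stripped during normalization.
        name, *raw_params = path_segment.split(";")
        params = [tuple(p.split("=", 1)) for p in raw_params if p]
        return name, params

    print(split_path_parameters("item;color=red;size=m"))
    # -> ('item', [('color', 'red'), ('size', 'm')])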
1.
On the Main tab, expand Security, select Application Security, Security Policies, and click Active Policies.
The Active Policies screen opens.
2.
Click the name of the security policy you want to modify.
The Properties screen opens.
3.
For the Configuration setting, select Advanced.
The screen refreshes, and displays additional configuration options.
4.
For the Handle Path Parameters setting, select the appropriate option:
To have the security policy handle path parameters as parameters of the URL, select As Parameters.
To perform no normalization or enforcement because the security policy treats the path parameter as part of the URL, select As URL.
To have the system strip path parameters from URLs as part of the normalization process, select Ignore.
5.
Click Save to save your changes.
6.
To put the security policy changes into effect immediately, click the Apply Policy button in the editing context area.
The first security checks that Application Security Manager performs are those for RFC compliance with the HTTP protocol. The system performs validation checks on HTTP requests to ensure that the requests are formatted properly. For each security policy, you can configure which HTTP protocol checks the system performs, and what happens if requests are not HTTP compliant.
Requests that fail any of the enabled protocol checks trigger an HTTP protocol compliance failed violation. You can configure the system to generate learning suggestions, alarms, or block requests that cause the violation. The system blocks requests that are not compliant with HTTP protocol standards if the security policy enforcement mode is set to blocking, and the violation is set to block.
Note: If a request is too long and causes the Request length exceeds defined buffer size violation, the system stops validating protocol compliance for that request.
If you use automatic policy building, the system immediately enables the Learn, Alarm, and Block settings for the HTTP protocol compliance failed violation; also, the security policy immediately enables one of the HTTP protocol checks: Bad HTTP version (version 1.0 or greater is required). After the system processes sufficient traffic from different users over a period of time, it enables other appropriate HTTP protocol checks.
1.
On the Main tab, expand Security, point to Application Security, then click Blocking.
The Settings screen opens.
2.
In the RFC Violations area, click the HTTP protocol compliance failed violation link.
The HTTP subviolations are displayed.
3.
Enable or disable the HTTP protocol checks, as required. For an explanation of the individual HTTP validations, click the icon preceding each one.
4.
Click Save to retain any changes you made.
5.
To put the security policy changes into effect immediately, click the Apply Policy button in the editing context area.
Explicit file type
Explicit file types have a known file extension name, for example, JSP or HTML.
No extension file type
The no extension file type represents file types that do not have the typical file extension as part of the name, or an extension of more than eight characters. The slash character (/) is an example of a no_ext file type.
Wildcard file type
Wildcard file types are those whose name is, or contains, a pattern string. When you configure a wildcard file type, and enable learn explicit entities, as the security policy processes traffic, the system discovers the file types that match the wildcard. You can then decide whether to add those file types to the security policy. For detailed information on wildcard file types, refer to Configuring wildcard file types.
Disallowed file types
You can also configure a list of file types that the system always rejects. These objects are known as disallowed file types. Refer to Disallowing specific file types, for more information.
Note: File types are case-sensitive by default. As a result, the security policy processes JPG and jpg files as separate file types. You can make security policies and all entities case-insensitive only when creating the policy.
Note: When using automatic policy building, the system automatically creates a no_ext file type for URLs with no file extension and URLs with file extensions longer than eight characters.
For allowed file types, which are file types that the system accepts, you can configure lengths, and whether to check responses for the associated requests. Table 3.3 describes the allowed file type properties.
File Type: Specifies the file type. The available types are:
Explicit: Specifies a unique file type name. Type the file type name in the adjacent box.
No Extension: Specifies that the web application has a URL with no file type. The system automatically assigns this file type the name no_ext.
Wildcard: Specifies that the file type is a wildcard expression. Any file type that matches the wildcard expression is considered legal. For example, entering the wildcard [*] specifies that the security policy allows any file type. Type a wildcard expression in the adjacent box.
Perform Staging: Specifies, when enabled, that the system places this entity in staging. Staging can be applied to both explicit and wildcard file types. If an entity is in staging, the system does not block requests for this entity even when a violation (such as file type length) occurs and the security policy is in blocking mode. The system logs learning suggestions produced by requests for staged entities on the Learning screens.
You can review the staging status on the Allowed File Types screen. If a file type is in staging, the system displays an icon indicating status. Point to the icon to display staging information.
When the file type has been in staging for the enforcement readiness period and you are no longer getting learning suggestions, you can disable this setting.
Learn Explicit Entities: For wildcard file types only, specifies how the system adds explicit entities that match a wildcard in the security policy. Choose the appropriate option:
Add All Entities: Creates a comprehensive whitelist policy that includes all website entities. This option produces a granular configuration and high security level, but may take more time to maintain such a policy. When the security policy is stable, the system removes the * wildcard entity from the security policy.
Never (wildcard only): Specifies that when false positives occur, the system suggests relaxing the settings of the wildcard entity, but does not add explicit entities to the policy. This option results in a security policy that is easy to manage, but it may result in more relaxed application security, because many application objects share security settings driven from the global or wildcard level.
URL Length: Specifies the maximum acceptable length, in bytes, for a URL in the context of an HTTP request containing this file type. The default is 100 bytes.
POST Data Length: Specifies the maximum acceptable length, in bytes, for the POST data of an HTTP request that contains the file type. The default is 1000 bytes.
1.
On the Main tab, expand Security, point to Application Security and click File Types.
The Allowed File Types screen opens.
2.
In the editing context area, ensure that the Current edited policy is the one that you want to update.
3.
Click the Create button.
The Add Allowed File Type screen opens.
4.
For the File Type setting, select the type, and then type a file extension or wildcard expression.
If you select No Extension, the system specifies no_ext.
Tip: For more information about wildcard file types, see Configuring wildcard file types.
6.
If you want the system to validate responses for this file type, select Enabled for the Apply Response Signatures setting.
7.
Click the Create button.
The Allowed File Types screen opens and lists the new file type.
8.
To put the security policy changes into effect immediately, click the Apply Policy button in the editing context area.
To modify the allowed file type characteristics
1.
On the Main tab, expand Security, point to Application Security, and click File Types.
The Allowed File Types screen opens.
2.
In the editing context area, ensure that the Current edited policy is the one that you want to update.
3.
From the Allowed File Types list, click the name of the file type that you want to update.
The File Type Properties screen opens.
4.
Make any changes as required, and click the Update button.
The screen refreshes, and returns to the Allowed File Types screen.
5.
To put the security policy changes into effect immediately, click the Apply Policy button in the editing context area.
Since web applications can change on a regular basis, you may find that the file types list contains file types that an application should not have. You can remove the file types you no longer need.
1.
On the Main tab, expand Security, point to Application Security, and click File Types.
The Allowed File Types screen opens.
2.
In the editing context area, ensure that the Current edited policy is the one that you want to update.
3.
From the Allowed File Types list, select the check box to the left of the file type that you want to remove from the security policy.
4.
Click the Delete button below the list.
The system removes the file type from the configuration.
5.
To put the security policy changes into effect immediately, click the Apply Policy button in the editing context area.
For some web applications, you may want to deny requests for certain file types. In this case, you can create a set of disallowed file types. When the Application Security Manager receives a request whose file type is disallowed, the system ignores, learns, logs, or blocks these illegal file types according to the settings you configure for the Illegal File Type violation on the Application Security: Blocking: Settings screen.
Adding disallowed file types is useful for file types that you know should never appear on your site (such as .exe files), or for files on your site that you never want users from the outside to reach (such as .bak files).
1.
On the Main tab, expand Security, point to Application Security, and click File Types.
The Allowed File Types screen opens.
2.
On the menu bar, click Disallowed File Types.
The Disallowed File Types screen opens.
3.
In the editing context area, ensure that the Current edited policy is the one that you want to update.
4.
Above the Disallowed File Types list, click the Create button.
The New Disallowed File Types screen opens.
5.
For the File Type (Explicit only) setting, type the file type that the security policy does not allow (for example, jpg or exe).
Note: File types are case-sensitive unless you cleared the Security Policy is case sensitive setting when you created the policy.
6.
Click the Create button.
The screen refreshes, and displays the updated Disallowed File Types screen.
7.
To put the security policy changes into effect immediately, click the Apply Policy button in the editing context area.
You can add three types of URLs for the web application that you are protecting:
Explicit URLs
An explicit URL has a specific name and represents one file or component of the web application, for example, /login.jsp or /sell.php.
Wildcard URLs
A wildcard URL is one whose name is or contains a pattern string, for example, *xml* or *.png. For more information on managing wildcard URLs, refer to Configuring wildcard URLs.
Disallowed URLs
A disallowed URL is a URL that is not allowed by the security policy. For information on creating disallowed URLs, refer to Specifying URLs not allowed by the security policy.
Table 3.4 lists the URL properties.
URL: Specifies a URL definition that allows the URLs it defines. The URL definition can be either a unique explicit URL or a wildcard definition. URLs are case-sensitive. The available types are:
Explicit: Specifies that the URL is a unique URL. Type the URL in the adjacent box.
Wildcard: Specifies a wildcard expression. Any URL that matches is considered legal. For example, typing * specifies that any URL is allowed by the security policy. Type a wildcard expression in the adjacent box.
Perform Staging: Specifies, when enabled, that the system places this URL in staging. Learning suggestions produced by requests for staged URLs are logged on the Learning screens. You can review the staging status on the URL List screen. If a URL is in staging, the system displays an icon indicating status; point to the icon to display staging information. When the URL has been in staging for the staging period and you are no longer getting learning suggestions, you can disable this setting. Applies to explicit URLs and wildcard URLs.
Learn Explicit Entities: Specifies whether the system adds explicit entities that do not exist in the security policy but match a wildcard entity in the security policy. If you select Add All Entities:
When the Policy Builder runs, it adds explicit URLs that do not exist in the security policy but match this wildcard URL.
The system displays, on the Enforcement Readiness Summary screen, how many entities are in staging or have learn explicit entities selected. You can also review the explicit entities by clicking the Have Suggestions link, decide which are legitimate, and accept them into the security policy by using the Traffic Learning screen.
If you choose Never (wildcard only), the system does not add matching URLs to the security policy, and instead suggests changing the attributes of the matched wildcard entities.
Check Flows to this URL: Specifies, when selected, that the security policy validates the flows to the URL. If this setting is disabled, the Security Enforcer ignores the flows to the URL. When you select this box, additional settings appear. For more information on flows, refer to Configuring flows.
URL is Entry Point: (Visible when Check Flows to this URL is selected.) Specifies, when selected, that this URL is a page through which a visitor can enter the web application.
URL is Referrer: (Visible when Check Flows to this URL is selected.) Specifies, when selected, that the URL is a URL from which a user can access other URLs in the web application.
URL can change Domain Cookie: Specifies, when selected, that the security policy does not block an HTTP request where the domain cookie was modified on the client side. This setting is applicable only if the URL is a referrer.
Navigation Parameter: Specifies, when selected, that you want to associate a navigation parameter with this URL. You must have a navigation parameter defined in the security policy to view this option.
Header-Based Content Profiles: Specifies how the system should recognize and enforce requests for this URL according to their header content. Type the request header information and click Add to create header-based content profiles. Applies to explicit URLs and wildcard URLs.
Note: If you want the system to examine XML, JSON, or Google Web Toolkit data, you must associate this URL with an XML, JSON, or GWT profile using the Profile Name setting.
Request Header Name: Specifies an explicit header name that must appear in requests for this URL. This field is not case-sensitive. Applies to explicit URLs and wildcard URLs.
Request Header Value: Specifies a simple pattern string (glob pattern matching) for the header value that must appear in legal requests for this URL (for example, *json*, xml_method?, or method[0-9]). If the header includes this pattern, the system assumes the request contains the type of data you select in the Parsed As setting. This field is case-sensitive. Applies to explicit URLs and wildcard URLs.
Parsed As: Displays how the system parses requests for this URL containing headers with this specific name and value:
Apply Value Signatures: Does not parse the content; processes the entire payload with the negative security attack signatures. This option provides basic security for protocols other than HTTP, XML, JSON, or GWT.
Disallow: Blocks requests for a URL containing this header content. The system logs the Illegal Request Content Type violation.
Don't Check: Performs no checks on the request body beyond minimal checks on the entire request.
GWT: Performs checks for data in requests, based on the configuration of a GWT (Google Web Toolkit) profile associated with this URL.
HTTP: Performs HTTP parsing of the request headers (default value).
JSON: Reviews JSON data using an associated JSON profile.
XML: Reviews XML data using an associated XML profile.
Applies to explicit URLs and wildcard URLs.
Profile Name: Specifies the XML, JSON, or GWT profile the security policy uses when examining requests for this URL if the header content is parsed as XML, JSON, or GWT. You can also create or view the XML, JSON, or GWT profile from this option. Applies to explicit URLs and wildcard URLs.
Clickjacking Protection: Specifies, when enabled, that the system adds the X-Frame-Options header to the domain URL's response header. This protects the web application against clickjacking. Clickjacking occurs when an attacker lures a user into clicking illegitimate frames and iframes because the attacker hid them on legitimate, visible web site buttons. Enabling this option therefore protects the web application from other web sites hiding malicious code behind them. The default is disabled. After you enable this option, you can select whether, and under what conditions, the browser should allow this URL to be rendered in a frame or iframe. Applies to explicit URLs and wildcard URLs.
Allow Rendering in Frames: Specifies the conditions under which the browser should allow this URL to be rendered in a frame or iframe.
Never: Specifies that this URL must never be rendered in a frame or iframe. The web application instructs browsers to hide, or disable, frame and iframe parts of this URL.
Same Origin Only: Specifies that the browser may load the frame or iframe if the referring page is from the same protocol, port, and domain as this URL. This instructs the browser to allow the user to navigate only within the same web application.
Only From URL: Specifies that the browser may load the frame or iframe from a specified domain. Type the protocol and domain in URL format, for example, http://www.mywebsite.com. Do not enter a sub-URL, such as http://www.mywebsite.com/index.
Applies to explicit URLs and wildcard URLs.
URL parameters are parameters that are associated with a specific URL. Extractions specify how the system discovers dynamic parameters and their properties. For full details on managing URL parameters and extractions, refer to Working with dynamic parameters and extractions.
Flows are the navigational relationships between the entities in a web application. Configuring flows may tighten the security policy, but this is an optional configuration option. For more information on flows, refer to Configuring flows.
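The Allow Rendering in Frames options described above presumably correspond to the standard X-Frame-Options directive values; the following Python sketch shows that assumed mapping (the mapping itself is an assumption, not taken from ASM documentation):

    # Assumed mapping to standard X-Frame-Options header values; not ASM source.
    def x_frame_options_value(option: str, allowed_origin: str = "") -> str:
        mapping = {
            "Never": "DENY",
            "Same Origin Only": "SAMEORIGIN",
            "Only From URL": ("ALLOW-FROM " + allowed_origin).strip(),
        }
        return mapping[option]

    print(x_frame_options_value("Only From URL", "http://www.mywebsite.com"))
    # -> ALLOW-FROM http://www.mywebsite.com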
1.
On the Main tab, expand Security, point to Application Security, and click URLs.
The Allowed URLs screen opens.
2.
In the editing context area, ensure that the Current edited policy is the one that you want to update.
3.
Click the Create button.
The New Allowed URL screen opens.
4.
In the Create New Allowed URL area, for the URL setting, select the type, and then type the explicit URL name in the format /index.html.
5.
From the Protocol list, select the protocol to be used to access the URL.
6.
To process requests for this URL according to the header content, create header-based content profiles. This is an advanced setting. For details, refer to Enforcing requests for URLs based on header content.
7.
To protect the application from harboring illegitimate frames and iframes with malicious code, select the Enabled check box for the Clickjacking Protection setting.
9.
Click the Create button.
The screen refreshes, and you can see the new URL in the list.
10.
To put the security policy changes into effect immediately, click the Apply Policy button in the editing context area.
To display URLs visually, you can display a tree view of the security policy that shows the explicit URLs with any associated parameters. For more information on the tree view, refer to Displaying security policies in a tree view.
1.
On the Main tab, expand Security, point to Application Security, and click URLs.
The Allowed URLs screen opens.
2.
In the editing context area, ensure that the Current edited policy is the one that you want to update.
3.
In the Allowed URLs List area, select the check box to the left of the URL that you want to remove from the security policy.
4.
Click the Delete button.
A confirmation popup screen opens, where you confirm the deletion of the URL.
5.
Click OK.
The system removes the URL from the security policy.
6.
To put the security policy changes into effect immediately, click the Apply Policy button in the editing context area.
1.
On the Main tab, expand Security, point to Application Security, and click URLs.
The Allowed URLs screen opens.
2.
In the editing context area, ensure that the Current edited policy is the one that you want to update.
3.
In the Allowed URLs List area, click the name of a URL.
The Allowed URL Properties screen opens, where you can view or modify the URL properties.
Tip: If the URL name is in gold letters, the URL is a referrer. Referrers call other URLs within the web application. See Identifying referrer URLs, following, for more information.
In lists of URLs, non-referrer URLs appear in blue and referrer URLs appear in gold. Referrer URLs are web pages that can request other URLs. For example, an HTML page can request a GIF, JPG, or PNG image file. The HTML page is the referrer, and the GIF, JPG, and PNG files are non-referrers.
A referrer in Application Security Manager is similar to the HTTP Referer header. If you need to configure referrers, use them for complex objects, such as HTML pages, but not for embedded objects, such as GIF files.
You can create a list of disallowed URLs, for example, to disallow access to an administrative interface to the web application by disallowing /admin/config.php. Disallowed URLs are explicit URLs and not wildcards.
If a requested URL is on the disallowed URLs list, the system ignores, learns, logs, or blocks these illegal URLs according to the settings you configure for the Illegal URL violation on the Application Security: Blocking: Settings screen. You can view learning suggestions for disallowed URLs on the Illegal URL learning screen. For more information, refer to Working with learning suggestions.
1.
On the Main tab, expand Security, point to Application Security, URLs, and then click Disallowed URLs.
The Disallowed URLs screen opens.
2.
In the editing context area, ensure that the Current edited policy is the one you want to update.
3.
Click the Create button.
The New Disallowed URL screen opens.
4.
For Protocol, select HTTP or HTTPS.
5.
For URL, type the name of the URL you want the security policy to consider illegal in the format /index.html.
6.
Click the Create button.
7.
To put the security policy changes into effect immediately, click the Apply Policy button in the editing context area.
When you create a new allowed URL, the system reviews requests for the URL using HTTP parsing. The system automatically creates a default header-based content profile for HTTP, and you cannot delete it. However, requests for a URL may contain other types of content, such as JSON, XML, or other proprietary formats. You can use header-based content profiles to configure how the system recognizes and enforces requests for this URL according to the header content in the request.
If the system detects a request for a URL that contains header content disallowed in the URL's header-based content profiles, the Illegal request content type violation occurs.
You can also use header-based content profiles to block traffic based on the type of header and header value in requests for a URL.
Note: The following procedure is for adding header-based content profiles to a URL that already exists in the configuration. If the URL does not yet exist, refer to Creating an explicit URL, or Creating wildcard URLs, before proceeding.
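The following Python sketch illustrates the matching idea behind header-based content profiles: a header name matched without regard to case, a case-sensitive glob pattern on the header value, and a resulting Parsed As behavior (the profile entries shown are hypothetical, not defaults):

    import fnmatch

    # Illustrative sketch, not ASM internals. Each entry: header name (matched
    # case-insensitively), case-sensitive glob for the value, Parsed As result.
    PROFILES = [
        {"header": "content-type", "value_glob": "*json*", "parsed_as": "JSON"},
        {"header": "content-type", "value_glob": "*xml*",  "parsed_as": "XML"},
    ]

    def parsed_as(request_headers: dict) -> str:
        headers = {name.lower(): value for name, value in request_headers.items()}
        for profile in PROFILES:
            value = headers.get(profile["header"], "")
            if fnmatch.fnmatchcase(value, profile["value_glob"]):  # case-sensitive glob
                return profile["parsed_as"]
        return "HTTP"   # default header-based content profile: plain HTTP parsing

    print(parsed_as({"Content-Type": "application/json"}))   # -> JSON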
1.
On the Main tab, expand Security, point to Application Security, and click URLs.
The Allowed URLs screen opens.
2.
In the editing context area, ensure that the Current edited policy is the one that you want to update.
3.
In the Allowed URLs List area, click the name of the URL to which you want to add a header-based content profile.
The Allowed URL Properties screen opens where you can modify the properties of the URL.
4.
Above the Allowed URL Properties area, select Advanced.
The screen displays additional options.
5.
For the Header-Based Content Profiles setting, specify the header and value as follows:
a)
In the Request Header Name field, type the explicit header name that must appear in requests for this URL. This field is not case-sensitive.
b)
In the Request Header Value field, type the pattern string for the header value to find in requests for this URL, for example, *json*, xml_method?, or method[0-9]. This field is case-sensitive.
c)
From the Parsed As list, specify how the system should enforce URL requests that match the header name and value.
Apply Value Signatures: Does not parse the content; processes the entire payload using the negative security attack signatures. This option provides basic security for protocols other than HTTP, XML, JSON, and GWT; for example, use *amf* as the header value for a content type of Action Message Format.
Disallow: Blocks requests for a URL containing this header content. The system logs the Illegal Request Content Type violation.
Don't Check: Performs no checks on the request body beyond minimal checks on the entire request.
d)
If the content is GWT, JSON, or XML, select an existing profile or click the create (+) button to create one.
e)
Click Add.
6.
To protect the application from harboring illegitimate frames and iframes with malicious code, select the Enabled check box for the Clickjacking Protection setting.
7.
Click the Update button.
The screen refreshes, and displays the Allowed URLs screen.
8.
Click the Apply Policy button (in the editing context area) to immediately put those changes into effect.
When you use the Deployment wizard to create a security policy, you select a language encoding (or let the system determine it automatically). The system enforces the character set of the language encoding in the URL element in URIs, and also for any wildcard URLs in the security policy. For example, by disallowing the characters <, >, ', and |, Application Security Manager can protect against many cross-site scripting attacks and injection attacks. You can modify which characters are enforced in the URL character set.
1.
On the Main tab, expand Security, point to Application Security, and click URLs.
The Allowed URLs screen opens.
2.
In the editing context area, ensure that the Current edited policy is the one that you want to update.
3.
On the menu bar, click Character Set.
The URLs Character Set screen opens, where you can view the character set, and state of each character.
4.
Use the View option to display the characters that you want to see.
5.
To modify the character set, click Allow or Disallow to define which characters the system should permit or prohibit in the name of a wildcard URL.
6.
Click Save to save your changes.
7.
Click the Apply Policy button (in the editing context area) to immediately put those changes into effect.
The application flow defines the access path leading from one URL to another URL within the web application. For example, a basic web page may include a graphic and a hyperlink to another page in the application. The calls to these other entities from the basic page make up the flow.
Note: Configuring flows is an optional task. Unless you need the enhanced security of configured flows, F5 Networks recommends that you do not configure flow-based security policies due to their complexity.
1.
On the Main tab, expand Security, point to Application Security, and click URLs.
The Allowed URLs screen opens.
2.
In the editing context area, ensure that the Current edited policy is the one that you want to update.
3.
In the Allowed URLs List area, click the name of the URL for which you want to see the flow.
The Allowed URL Properties screen opens.
4.
On the menu bar, click Flows to URL.
The Flows to URL screen opens.
5.
Above the Flows to URL area, click the Create button.
The Create a New Flow popup screen opens.
6.
In the Referrer URL field, select one of the following:
Entry Point: Specifies that the client can enter the application from this URL
URL Path: Specifies the path of the referrer URL which refers to other URLs in the web application (for example, /index.html).
7.
From the Protocol list, select the appropriate protocol.
8.
From the Method list, select the HTTP method that the URL expects a visitor to use to access the authenticated URL, for example, GET or POST.
9.
In the Frame Target field, type the index (0-29, or 99) of the HTML frame in which the URL belongs, if the web application uses frames.
Tip: If your web application does not use frames, type the value 1.
12.
Click OK.
The popup screen closes, and on the Flows to URL screen, you see the URLs from which the authenticated URL can be accessed.
Tip: Click a URL in the Flows list to open the Flow Properties screen, where you can view or modify the flow's properties.
13.
To put the security policy changes into effect immediately, click the Apply Policy button in the editing context area.
1.
On the Main tab, expand Security, point to Application Security, URLs and click Flows List.
The Flows List screen opens.
When you view the flows for a particular URL, the system displays the flow to the particular URL. Note that flows may be associated with explicit URLs only.
1.
On the Main tab, expand Security, point to Application Security, and click URLs.
The Allowed URLs screen opens.
2.
In the editing context area, ensure that the Current edited policy is the one that you want to update.
3.
In the Allowed URLs List area, click the name of the URL for which you want to see the flow.
The Allowed URL Properties screen opens.
4.
On the menu bar, click Flows to URL.
The Flows to URL screen opens.
Some web applications contain URLs with dynamic names, for example, the links to a server location for file downloads, where the file name may be unique to each user. You can configure the system to detect these URLs by configuring a dynamic flow.
For a dynamic flow, you configure a regular expression that describes the dynamic name, and associate the flow with the URL. The Application Security Manager then extracts the dynamic URL names from responses to the URL for which the dynamic flow is configured.
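A rough Python sketch of the extraction idea; the prefix, suffix, and regular expression shown are hypothetical examples, not defaults:

    import re

    # Illustrative sketch of dynamic flow extraction, not ASM internals.
    PREFIX = "<h3>Flows2URL</h3>"                          # fixed string before the links
    SUFFIX = "</ul>"                                       # fixed string after the links
    NAME_PATTERN = re.compile(r"/downloads/[\w\-]+\.zip")  # RegExpValue-style pattern

    def extract_dynamic_urls(response_body: str):
        start = response_body.find(PREFIX)
        end = response_body.find(SUFFIX, start)
        if start == -1 or end == -1:
            return []
        return NAME_PATTERN.findall(response_body[start:end])

    html = '<h3>Flows2URL</h3><ul><li><a href="/downloads/report-2024.zip">x</a></li></ul>'
    print(extract_dynamic_urls(html))   # -> ['/downloads/report-2024.zip']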
1.
On the Main tab, expand Security, point to Application Security, and click URLs.
The Allowed URLs screen opens.
2.
In the editing context area, ensure that the Current edited policy is the one that you want to update.
3.
In the Allowed URLs List area, click the name of the URL for which you want to see the flow.
The Allowed URL Properties screen opens.
4.
On the menu bar, click Dynamic Flows from URL.
The Dynamic Flows from URL screen opens.
5.
Click the Create button.
The Create New Dynamic Flow popup screen opens.
6.
In the Prefix field, type a fixed substring that appears near the top of the HTML source page before the dynamic URL. It may be a name of a section in combination with HTML tags, for example, <h3>Flows2URL</h3>.
7.
For the RegExpValue setting, type a regular expression that specifies the set of URLs that make up the dynamic flow, for example, a set of archive files.
8.
For the Suffix setting, type a fixed string that occurs in the referring URLs source code, and is physically located after the reference to the dynamic name URL.
9.
Click the OK button.
The popup screen closes, and on the Dynamic Flows from URL screen, you see the dynamic flow extraction properties.
10.
To put the security policy changes into effect immediately, on the Main tab, expand Security, point to Application Security, and click URLs, and then click the Apply Policy button in the editing context area.
Your web application may contain URLs that should be accessed only through other URLs. For example, in an online banking application, account holders should be able to access their account information only by logging on through a login screen first. In your security policy, you can create login URLs to limit access to authenticated URLs. A login page is a URL in a web application that requests must pass through to get to the authenticated URLs. Use login pages, for example, to prevent forceful browsing of restricted parts of the web application, by defining access permissions for users. Login pages also allow session tracking of user sessions.
You can specify one or more login URLs for a web application. If a user tries to bypass the login URLs, the system issues the Login URL bypassed violation. You can also configure login page settings that apply to all login URLs including the expiration time, authenticated URLs, and logout URLs. If a user session is idle and exceeds the expiration time, the system issues the Login URL expired violation, and the user can no longer reach the authenticated URLs. You can use login URLs to enforce idle time-outs on applications that are missing this functionality.
For both login violations, if the enforcement mode is blocking, the system sends the Login Page Response to the client. For information on response pages, see Configuring the response pages.
1.
On the Main tab, expand Security, point to Application Security, and click Sessions and Logins.
The Login Pages List screen opens.
2.
In the editing context area, ensure that the Current edited policy is the one that you want to update.
3.
Click the Create button.
The New Login Page screen opens.
4.
For the Login URL setting, select Explicit or Wildcard, select the appropriate protocol, and then type the URL that users must pass through to access the target URL. Type the URL in the format /login for an explicit URL or /login* for a wildcard URL.
5.
For Authentication Type, specify the method the web server uses to authenticate the login URL against user credentials.
None: The web server does not authenticate users trying to access the web application through the login URL. This is the default setting.
HTML Form: The web application uses a form to collect and authenticate user credentials. If you use this option, you also need to type the Username Parameter Name and Password Parameter Name that are written in the code of the HTML form. When a request includes the user name or password, the system recognizes that request as a login attempt.
HTTP Basic Authentication: The user name and password are transmitted in Base64 and stored on the server in plain text.
HTTP Digest Authentication: The web server performs the authentication; user names and passwords are not transmitted over the network, nor are they stored in plain text.
6.
In the Access Validation area, define at least one of the following validation criteria for the login URL response:
A string that should appear in the response
Type a string that must appear in the response for the system to detect a successful login attempt; for example, Successful Login.
A string that should NOT appear in the response
Type a string that indicates a failed login attempt; for example, Authentication failed.
Expected HTTP response status code
Type an HTTP response code that is sent when the user successfully logs in; for example, 200.
Expected validation header name and value (for example, Location header)
Type a header name and value that is sent when the user successfully logs in.
Expected validation domain cookie name
Type a defined domain cookie name that is sent when the user successfully logs in.
Expected parameter name (added to URI links in the response)
Type a parameter that is sent when the user successfully logs in.
Note that if you configure more than one validation criterion, all of the criteria must be met before the system recognizes a successful login through the login URL (see the sketch following this procedure).
7.
Click the Create button to add the login URL to the security policy.
The new login URL appears in the Login URLs area.
9.
To put the security policy changes into effect immediately, click the Apply Policy button in the editing context area.
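A minimal sketch of the access validation logic described in step 6, where every configured criterion must hold (the criterion values shown are hypothetical):

    # Illustrative sketch, not ASM code: a login is recognized as successful only
    # if all of the configured validation criteria are met.
    CRITERIA = {
        "string_should_appear": "Successful Login",
        "string_should_not_appear": "Authentication failed",
        "expected_status_code": 200,
    }

    def login_succeeded(status_code: int, response_body: str) -> bool:
        return all([
            CRITERIA["string_should_appear"] in response_body,
            CRITERIA["string_should_not_appear"] not in response_body,
            status_code == CRITERIA["expected_status_code"],
        ])

    print(login_succeeded(200, "Welcome back! Successful Login."))   # -> True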
1.
On the Main tab, expand Security, point to Application Security, Sessions and Logins, then click Login Enforcement.
2.
If you want the login URL to be valid for only a certain length of time, set Expiration Time to Enabled, and type a value, in seconds.
3.
a)
For the Authenticated URLs setting, type the URL in the format /logon.html (wildcards are allowed).
b)
Click Add.
4.
a)
For the Logout URLs setting, type the URL in the format /logoff.html (explicit URLs only).
b)
Click Add.
5.
Click the Save button.
Depending on the web application, a response may contain sensitive user information, such as credit card numbers or social security numbers (U.S. only). The Data Guard feature can prevent responses from exposing sensitive information by masking the data (also known as response scrubbing).
Note: When you enable the Mask Data option, the system replaces the sensitive data with asterisks (****). F5 Networks recommends that you enable this setting if the security policy enforcement mode is transparent. Otherwise, when the system returns a response, sensitive data could be exposed to the client.
Using Data Guard, you can configure custom patterns using PCRE regular expressions to protect other forms of sensitive information, and indicate exception patterns not to consider sensitive. You can also specify which URLs you want the system to examine for sensitive data.
The system can examine the content of responses for specific types of files that you do not want to be returned to users, such as ELF binary files or Microsoft® Word documents. File content checking causes the system to examine responses for the file content types you select and block sensitive file content depending on the blocking modes, but does not mask the sensitive file content.
When you have enabled the Data Guard feature, and the system detects sensitive information in a response, the system generates the Data Guard: Information leakage detected violation. If the security policy enforcement mode is set to blocking, the system does not send the response to the client.
You can configure one additional user-defined response content-type using the system variable user_defined_accum_type. If response logging is enabled, these responses can also be logged.
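Custom and exception patterns are ordinary PCRE regular expressions, so the masking behavior is easy to preview outside the system. The following Python sketch is illustrative only; the patterns are examples, not the patterns that ASM uses internally.

import re

# Example sensitive pattern: a U.S. social security number in the form nnn-nn-nnnn.
SENSITIVE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
# Example exception pattern: test numbers beginning with 999 are not considered sensitive.
EXCEPTION = re.compile(r"\b999-\d{2}-\d{4}\b")

def mask(body):
    """Replace sensitive matches with asterisks, leaving exception matches intact."""
    def _sub(match):
        text = match.group(0)
        return text if EXCEPTION.fullmatch(text) else "*" * len(text)
    return SENSITIVE.sub(_sub, body)

print(mask("SSN on file: 123-45-6789; test record: 999-12-3456"))
# -> SSN on file: ***********; test record: 999-12-3456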
1.
On the Main tab, expand Security, point to Application Security, and click Data Guard.
The Data Guard screen opens.
2.
In the editing context area, ensure that the Current edited policy is the one that you want to update.
3.
Enable the Data Guard check box.
4.
If you want the system to consider credit card numbers as sensitive data, enable the Credit Card Numbers check box.
5.
If you want the system to consider U.S. social security numbers (in the form nnn-nn-nnnn, where n is an integer) as sensitive data, enable the U.S. Social Security Numbers check box.
6.
Use the Custom Patterns setting to specify additional patterns for sensitive data:
a)
Enable the Custom Patterns check box.
b)
In the New Pattern field, type a PCRE regular expression to specify the sensitive data pattern, then click Add.
7.
Use the Exception Patterns setting to specify patterns in the data not to be considered sensitive:
a)
Enable the Exception Patterns check box.
b)
In the New Pattern field, type a PCRE regular expression to specify the pattern that you do not want to be considered sensitive (for example, 999-[\d][\d]-[\d][\d][\d][\d]), then click Add.
8.
If, in the response, you want the system to replace the sensitive data with asterisks (****), enable the Mask Data check box.
9.
To review responses for specific file content (for example, to determine whether someone is trying to download a sensitive type of document), perform these steps:
a)
For the File Content Detection setting, select the Check File Content check box.
The screen refreshes and displays additional settings.
b)
Move the file types you want the system to consider sensitive from the Available list into the Members list.
10.
Use the Enforcement Mode setting to specify which URLs to examine for sensitive data:
To inspect all URLs, use the default value of Ignore URLs in list, and do not add any URLs to the list.
To inspect all URLs except a few specific URLs, use the default value of Ignore URLs in list, and add the exceptions to the list.
To inspect only specific URLs, select Enforce URLs in list, and add the URLs to check to the list.
11.
Click the Save button to retain any changes you made.
12.
To put the security policy changes into effect immediately, click the Apply Policy button in the editing context area.
1.
On the Main tab, expand Security, point to Application Security, and click Data Guard.
The Data Guard screen opens.
2.
In the editing context area, ensure that the Current edited policy is the one that you want to update.
3.
Clear the Data Guard check box.
4.
Click the Save button to save your change.
5.
To put the security policy changes into effect immediately, click the Apply Policy button in the editing context area.
If users can access a web application using multiple host names or IP addresses, you can add them to the security policy that protects the application. The system uses this list of host names as follows:
The Policy Builder considers the host names in the list to be legitimate internal links and forms, and learns security policy entities from them, and also from relative URLs that do not contain a domain name.
The CSRF feature uses the list to distinguish between internal and external links and forms, and the system inserts the CSRF token only into internal links and forms.
The Application Security Manager identifies web application-related host names as fully qualified domain names (FQDNs) in requests or responses. If you enable the Include Sub-domains setting, the system matches all sub-domains when evaluating FQDNs, and inserts ASM cookies into responses from the sub-domains of the host name. When an application uses several sub-domains, all ASM cookie-based features (such as CSRF protection, Login Pages, and Dynamic Sessions ID in URL) require ASM cookies to be inserted from the correct domain.
Note: The Policy Builder can automatically add domain names to the Host Name list if you select the Host Names check box in the Automatic Policy Building Settings area of the Settings screen.
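The effect of the Include Sub-domains setting can be summarized in a few lines. The following Python sketch is an illustration using assumed host names, not ASM code.

def host_matches(request_host, configured, include_subdomains):
    """Return True if the request's Host value matches the configured host name."""
    request_host = request_host.lower().rstrip(".")
    configured = configured.lower().rstrip(".")
    if request_host == configured:
        return True
    # With Include Sub-domains enabled, any name ending in "." + configured also matches.
    return include_subdomains and request_host.endswith("." + configured)

print(host_matches("www.example.com", "example.com", include_subdomains=False))  # False
print(host_matches("www.example.com", "example.com", include_subdomains=True))   # True
print(host_matches("example.com", "example.com", include_subdomains=False))      # True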
1.
On the Main tab, expand Security, point to Application Security, point to Headers, and click Host Names.
The Host Names screen opens.
2.
In the editing context area, ensure that the Current edited policy is the one that you want to update.
3.
Above the list of host names, click the Create button.
The New Host Name screen opens.
4.
In the Host Name field, type the host name that is used to access the web application (either a domain name or an IP address).
5.
To include all sub-domains of the specified host name, for the Include Sub-domains setting select the Enabled check box.
6.
Click the Create button.
7.
To put the security policy changes into effect immediately, click the Apply Policy button in the editing context area.
All security policies accept standard HTTP methods by default. The default allowed methods are GET, HEAD, and POST. The system treats any incoming HTTP request that uses an HTTP method other than the allowed methods as an invalid request. If your web application uses HTTP methods other than the default allowed methods, you can add them to the security policy.
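Conceptually, method enforcement is simple membership in the allowed methods list, as this short, purely illustrative Python sketch shows.

ALLOWED_METHODS = {"GET", "HEAD", "POST"}   # the default allowed methods; add others as needed

def is_allowed(method):
    """A request using any other HTTP method is treated as an invalid request."""
    return method.upper() in ALLOWED_METHODS

print(is_allowed("POST"))    # True
print(is_allowed("DELETE"))  # False -> handled as an invalid request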
1.
On the Main tab, expand Security, point to Application Security, point to Headers and click Methods.
The Methods screen opens.
2.
In the editing context area, ensure that the Current edited policy is the one that you want to update.
3.
Click the Create button.
The New Allowed Method screen opens.
4.
For the Method setting, choose one of the following actions:
Click the Predefined setting, then select the system-supplied method that you want to add to the allowed methods list.
Click Custom, and then in the Custom Method field type the name of a method. Use this option if the method you want to allow is not in the system-supplied list.
5.
If using flows in the security policy, select Advanced next to Allowed Method Properties, then from the Act as Method list, select one of the following options:
GET: Specifies that the request does not contain any HTTP data following the HTTP header section.
POST: Specifies that the request contains HTTP data following the HTTP header section.
6.
Click the Create button.
The screen refreshes, and on the Methods screen, you can see the additional allowed method in the list.
7.
To put the security policy changes into effect immediately, click the Apply Policy button in the editing context area.
In addition to creating allowed methods, you can edit or delete allowed methods on the Methods screen, as required by changes in the web application. Note that you cannot edit or delete the default allowed methods that the system provides (GET, POST, and HEAD).
To display the Methods screen, expand Security, point to Application Security, Headers, then click Methods.
Blocking policy
The blocking policy specifies the blocking actions for each of the security policy violations. The blocking policy also specifies the enforcement mode for the security policy. For more information, see Configuring policy blocking.
Evasion techniques
Sophisticated hackers have figured out coding methods that normal attack signatures do not detect. These methods are known as evasion techniques. Application Security Manager can detect the evasion techniques, and you can configure blocking properties for them. For more information, see Configuring blocking properties for evasion techniques.
HTTP Protocol Compliance
The system performs validation checks on HTTP requests to ensure that the requests are formatted properly. You can configure which validation checks are enforced by the security policy. For more information, see Validating HTTP protocol compliance.
Web Services Security
You can configure which web services security errors must occur for the system to learn, log, or block requests that trigger the errors. For information on how to configure web services security errors, see Configuring blocking properties for web services security.
Response pages
When the enforcement mode is blocking, and a request (or response) triggers a violation for which the Block action is enabled, the system returns the response page to the client. If you configure login pages, you can also configure a response page for blocked access. For more information, see Configuring the response pages.
On the Blocking: Settings screen, you configure the enforcement mode for the security policy, and the blocking actions for all of the violations.
Click the information icon preceding a violation, or refer to Appendix A, Security Policy Violations, for descriptions of the violations. For information on setting the learning, alarm, and blocking actions for the violations, see Configuring the blocking actions.
The security policy has two enforcement modes: transparent and blocking. In transparent mode, the system allows requests to reach the web application even if the request violates some aspect of the security policy. In blocking mode, the system does not allow requests that violate the security policy to reach the web application, and instead returns the blocking response page to the client. Note that the system blocks requests only for those violations with enabled Block flags. See Configuring the blocking actions, for more information on the Block flag.
Tip: You can set the enforcement mode from either the Security Policies > Properties screen or the Blocking: Settings screen.
1.
On the Main tab, expand Security, point to Application Security, and click Blocking.
The Blocking: Settings screen opens.
2.
In the editing context area, ensure that the Current edited policy is the one that you want to update.
3.
In the Violations List area, for the Enforcement Mode setting, select either Transparent or Blocking.
4.
Click the Save button to save your changes.
5.
To put the security policy changes into effect immediately, click the Apply Policy button in the editing context area.
On the Application Security: Blocking: Settings screen, you can enable or disable the Learn, Alarm, and Block flags, or blocking actions, for each violation. The blocking actions (along with the enforcement mode) determine how the system processes requests that trigger the corresponding violation. Entities in staging and wildcards set to add all entities do not cause violations, and consequently are not blocked.
Learn
When the Learn flag is enabled for a violation, and a request triggers the violation, the system logs the request and generates learning suggestions. The system takes this action when the security policy is in either the transparent or blocking enforcement mode.
Alarm
When the Alarm flag is enabled for a violation, and a request triggers the violation, the system logs the request, and also logs a security event. The system takes this action when the security policy is in either the transparent or blocking enforcement mode.
Block
The Block flag blocks traffic when (1) the security policy is in the blocking enforcement mode, (2) a violation occurs, (3) the Block flag is enabled for the violation, and (4) the entity is enforced. The system sends the blocking response page (containing a Support ID to identify the request) to the client.
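The interaction between the enforcement mode and the Learn, Alarm, and Block flags can be summarized in a single decision, sketched below in Python. This is a simplified illustration of the behavior described above, not ASM code; the function is called only when a request has already triggered a violation.

def handle_violation(enforcement_mode, learn, alarm, block, entity_enforced):
    """Return the actions taken for a request that triggered a violation."""
    actions = []
    if learn:
        actions.append("log the request and generate learning suggestions")
    if alarm:
        actions.append("log the request and log a security event")
    # Traffic is blocked only when all of the remaining conditions hold.
    if enforcement_mode == "blocking" and block and entity_enforced:
        actions.append("send the blocking response page with a Support ID")
    else:
        actions.append("forward the request")
    return actions

print(handle_violation("transparent", learn=True, alarm=True, block=True, entity_enforced=True))
print(handle_violation("blocking", learn=False, alarm=True, block=True, entity_enforced=True))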
1.
On the Main tab, expand Security, point to Application Security, Blocking, and click Settings.
The Blocking: Settings screen opens.
2.
In the editing context area, ensure that the Current edited policy is the one you want to update.
3.
Review each violation and adjust the Learn, Alarm, and Block flags as required.
Note: The Block flags are available only when the enforcement mode of the security policy is set to Blocking.
4.
Click Save to save any changes you may have made on this screen.
5.
To put the security policy changes into effect immediately, click the Apply Policy button in the editing context area.
For every HTTP request, Application Security Manager examines the request for evasion techniques, which are coding methods used by attackers designed to avoid detection by attack signatures. You can enable or disable the blocking properties for evasion techniques.
1.
On the Main tab, expand Security, point to Application Security, Blocking, then click Evasion Techniques.
The Evasion Techniques screen opens.
2.
In the editing context area, ensure that the Current edited policy is the one you want to update.
3.
Enable or disable the blocking properties for the evasion techniques, as required.
Tip: Click the information icon for descriptions of the evasion techniques.
4.
Click the Save button to retain any changes you may have made.
5.
To put the security policy changes into effect immediately, click the Apply Policy button in the editing context area.
You can configure which HTTP protocol compliance checks the security policy validates and enforces. If a request fails any of the enabled HTTP protocol compliance checks, the system responds according to the Learn/Alarm/Block settings of the HTTP protocol compliance failed violation on the Application Security: Blocking: Settings screen. For information on configuring the compliance checks, see Validating HTTP protocol compliance.
You can configure which web services security errors must occur for the system to learn, log, or block requests that trigger the errors. These errors are sub-violations of the parent violation, Web Services Security failure, configured on the Blocking Settings screen, as described in Configuring policy blocking.
If a request causes one of the enabled errors to occur, web services security stops parsing the document. How the system reacts depends on how you configured the blocking settings for the Web Services Security failure violation:
If configured to Learn or Alarm when the violation occurs, the system does not encrypt or decrypt the SOAP message, and sends the original document to the web service.
If configured to Block when the violation occurs, the system blocks the traffic and prevents the document from reaching its intended destination.
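In other words, only the Block action stops the document; with Learn or Alarm the original SOAP message passes through without encryption or decryption. A minimal, purely illustrative Python sketch of that branch:

def react_to_ws_security_error(block_enabled, soap_document):
    """Sketch of the reaction once parsing stops on a web services security error."""
    if block_enabled:
        return None            # the traffic is blocked and the document never reaches the web service
    # Learn/Alarm only: the original document is forwarded as-is, unencrypted and undecrypted.
    return soap_document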
Note: You can also view the web services security errors directly on the Web Services Security screen (on the Main tab, expand Security, point to Application Security, Blocking, then click Web Services Security).
1.
On the Main tab, expand Security, point to Application Security, Blocking, and click Settings.
The Blocking Settings screen opens.
2.
In the editing context area, ensure that the Current edited policy is the one you want to update.
3.
In the Input Violations area, click the Web Services Security Failure violation link.
The web services subviolations are displayed.
4.
Enable or disable the web services subviolations, as required. For an explanation of the individual failures, click the icon preceding each one.
5.
Click the Save button to retain your changes.
6.
To put the security policy changes into effect immediately, click the Apply Policy button in the editing context area.
The Application Security Manager has a default blocking response page that it returns to the client when the client request, or the web server response, is blocked by the security policy. The system also has a login response page for login violations.
All default response pages contain a variable, <%TS.request.ID()%>, that the system replaces with a support ID number when it issues the page. Customers can use the support ID to identify the request when making inquiries.
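The variable is replaced verbatim wherever it appears in the response text. The following Python sketch mimics that substitution with a made-up support ID, just to show the result.

# A hypothetical custom response body containing the support ID variable.
response_body = (
    "<html><body>"
    "The requested URL was rejected. "
    "Your support ID is: <%TS.request.ID()%>"
    "</body></html>"
)

support_id = "1234567890123456789"   # made-up example; the system generates the real value
print(response_body.replace("<%TS.request.ID()%>", support_id))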
The system uses default pages in response to a blocked request or blocked login. If the default pages are acceptable, you do not need to change them and they work automatically. However, if you want to include XML or AJAX blocking responses, you need to enable the blocking behavior first:
1.
On the Main tab, expand Security, point to Application Security, Blocking, and click Response Pages.
The Response Page screen opens.
2.
In the editing context area, ensure that the Current edited policy is the one you want to update.
3.
For the Response Type setting, select one of the following options:
Default Response: Specifies that the system returns the system-supplied response page in HTML. No further configuration is needed.
Custom Response: Specifies that the system returns a response page with HTML code that you define.
Redirect URL: Specifies that the system redirects the user to a specific web page.
SOAP Fault: Specifies that the system returns the system-supplied blocking response page in XML format. You cannot edit the text.
Note: The settings on the screen change depending on the selection that you make for the Response Type setting.
4.
If you selected the Custom Response option in step 3, you can either modify the default text or upload an HTML file.
To modify the default text:
a)
For the Response Headers setting, type the response header you want the system to send.
b)
For the Response Body setting, type the text you want to send to a client in response to an illegal blocked request. Use standard HTTP syntax.
Tip: Click Show to see what the response will look like.
To upload an HTML file instead:
a)
For the Upload File setting, specify an HTML file.
b)
Click Upload to upload the file into the response body.
5.
If you selected the Redirect URL option in step 3, then in the Redirect URL field, type the URL to which the system redirects the user, for example, http://www.myredirectpage.com. The URL should be for a page that is not within the web application itself.
To redirect the blocking page to a URL with a support ID in the query string, type the URL followed by the support ID variable; for example, http://www.myredirectpage.com?support_id=<%TS.request.ID%>.
The system replaces <%TS.request.ID%> with the relevant support ID so that the blocked request is redirected to the URL with the relevant support ID.
6.
Click Save.
7.
To put the security policy changes into effect immediately, click the Apply Policy button in the editing context area.
You can configure the login page response that the system sends if the user does not meet the preconditions when requesting the target URL of a configured login page.
1.
On the Main tab, expand Security, point to Application Security, Blocking, and click Response Pages.
The Default Response Page screen opens.
2.
In the editing context area, ensure that the Current edited policy is the one you want to update.
3.
Click the Login Page Response tab.
4.
For the Response Type setting, select one of the following options:
Default Response: Specifies that the system returns the system-supplied response page in HTML. No further configuration is needed.
Custom Response: Specifies that the system returns a response page with HTML code that you define.
Redirect URL: Specifies that the system redirects the user to a specific web page if the login fails.
SOAP Fault: Specifies that the system returns the system-supplied blocking response page in XML format. You cannot edit the text.
Note: The settings on the screen change depending on the selection that you make for the Response Type setting.
5.
If you selected the Custom Response option in step 4, you can either modify the default text or upload an HTML file.
To modify the default text:
a)
For the Response Header setting, type the response header you want the system to send.
b)
For the Response Body setting, type the text you want to send to a client in response to an illegal blocked request. Use standard HTTP syntax.
Tip: Click Show to see what the response will look like.
To upload an HTML file instead:
a)
For the Upload File setting, specify an HTML file.
b)
Click Upload to upload the file into the response body.
6.
If you selected the Redirect URL option in step 4, then in the Redirect URL field, type the URL to which the system redirects the user, for example, http://www.myredirectpage.com. The URL should be for a page that is not within the web application itself.
To redirect the blocking page to a URL with a support ID in the query string, type the URL followed by the support ID variable; for example, http://www.myredirectpage.com?support_id=<%TS.request.ID%>.
The system replaces <%TS.request.ID%> with the relevant support ID so that the blocked request is redirected to the URL with the relevant support ID.
7.
Click Save.
8.
To put the security policy changes into effect immediately, click the Apply Policy button in the editing context area.
You can configure the blocking response that the system sends to the user when the security policy blocks a client request containing XML content that does not comply with the settings of an XML profile in the security policy.
If you want to use the default SOAP response (SOAP Fault), you only need to enable XML blocking on the profile.
1.
On the Main tab, expand Security, point to Application Security, Blocking, and click Response Pages.
The Default Response Page screen opens.
2.
In the editing context area, ensure that the Current edited policy is the one you want to update.
3.
Click the XML Response Page tab.
4.
For the Response Type setting, select Custom Response.
5.
For the Response Header setting, type the response header you want the system to send.
6.
For the Response Body setting, type the text you want to send to a client in response to an illegal blocked request. Use XML syntax.
To upload a file containing the XML response instead, specify an XML file and click Upload to load the file into the response body.
Tip: Click Show to see what the response will look like.
7.
Click Save.
8.
To put the security policy changes into effect immediately, click the Apply Policy button in the editing context area.
1.
On the Main tab, expand Security, point to Application Security, Content Profiles, and click XML Profiles.
2.
In the editing context area, ensure that the Current edited policy is the one you want to update.
3.
Click the name of the XML profile that you want to update.
4.
For the Use XML Blocking Response Page setting, select the Enabled check box.
5.
Click Update.
6.
To put the security policy changes into effect immediately, click the Apply Policy button in the editing context area.
For details on setting up AJAX response pages, refer to the BIG-IP® Application Security Manager: Implementations manual.
Cross-site request forgery (CSRF) is an attack where a user is forced to execute unauthorized actions (such as a bank transfer) within a web application where the user is currently authenticated.
You can configure a security policy to protect against CSRF attacks, including specifying which URLs you want the system to examine. If the system detects a CSRF attack, it issues the CSRF attack detected violation. The system inserts an Application Security Manager token to prevent CSRF attacks. To prevent token hijacking, the system reviews the token expiration date. If the token is expired, the system issues the CSRF authentication expired violation.
If you want to block requests suspected of being CSRF attacks, you need to enable CSRF protection, and set the security policy enforcement mode to Blocking. Also, one or both of the CSRF violations must have the Block flag enabled (on the Application Security: Blocking: Settings screen), as shown in Figure 3.2. Though these violations are set to block by default, CSRF protection must be enabled for this feature to work.
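The idea behind the inserted token is that internal links and forms carry a value that a forged cross-site request cannot supply, and that the value stops being accepted once it is older than the configured expiration time. The following Python sketch illustrates the idea only; the token format, names, and checks are hypothetical and are not ASM's actual mechanism.

import hmac, hashlib, time

SECRET = b"per-session secret"        # hypothetical per-session key

def make_token(now=None):
    """Issue a token bound to its creation time (illustrative format)."""
    ts = str(int(now if now is not None else time.time()))
    sig = hmac.new(SECRET, ts.encode(), hashlib.sha256).hexdigest()
    return ts + "." + sig

def check_token(token, max_age=600, now=None):
    """Return a violation name, or None if the token is acceptable."""
    try:
        ts, sig = token.split(".", 1)
    except ValueError:
        return "CSRF attack detected"
    expected = hmac.new(SECRET, ts.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return "CSRF attack detected"            # the token is missing or forged
    if (now if now is not None else time.time()) - int(ts) > max_age:
        return "CSRF authentication expired"     # the token is older than the expiration time
    return None

token = make_token(now=1000000)
print(check_token(token, now=1000100))    # None (fresh, valid token)
print(check_token(token, now=1001000))    # CSRF authentication expired
print(check_token("bogus"))               # CSRF attack detected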
1.
On the Main tab, expand Security, point to Application Security, then click CSRF Protection.
The CSRF Protection screen opens.
2.
In the editing context area, ensure that the Current edited policy is the one you want to update.
3.
For the CSRF Protection setting, select the Enabled check box.
4.
To protect only SSL requests in the secured part of the application, for the SSL Only setting, select the Enabled check box.
To protect the entire web application, leave the Enabled check box for the SSL Only setting cleared.
5.
To specify when the inserted token expires:
a)
For Expiration Time, select Enabled.
b)
In the field, type the amount of time, in seconds (1 to 99999), after which the cookie should expire. The default is 600 seconds.
6.
For URLs List, specify the URLs you want the system to examine. (The system considers all other URLs safe.)
Tip: You can also use wildcards for URLs; for example /myaccount/*.html, /*/index.php, or /index.?html.
a)
Type a URL that you want the system to examine.
b)
Click Add.
7.
Click Save.
8.
To put CSRF protection into effect immediately, click the Apply Policy button in the editing context area.