Manual Chapter: Redundant System Configuration
Redundant System Configuration

About DSC configuration on a VIPRION system

The way you configure device service clustering (DSC®) (also known as redundancy) on a VIPRION® system varies depending on whether the system is provisioned to run the vCMP® feature.

Important: When configuring redundancy, always configure network, as opposed to serial, failover. Serial failover is not supported for VIPRION® systems.

DSC configuration for non-vCMP systems

For a device group that consists of VIPRION® systems that are not licensed and provisioned for vCMP®, each VIPRION cluster constitutes an individual device group member. The following table describes the IP addresses that you must specify when configuring redundancy.

Table 1. Required IP addresses for DSC configuration on a non-vCMP system
Feature IP addresses required
Device trust The primary floating management IP address for the VIPRION cluster.
ConfigSync The unicast non-floating self IP address assigned to VLAN internal.
Failover
  • Recommended: The unicast non-floating self IP address that you assigned to an internal VLAN (preferably VLAN HA), as well as a multicast address.
  • Alternative: All unicast management IP addresses that correspond to the slots in the VIPRION cluster.
Connection mirroring For the primary address, the non-floating self IP address that you assigned to VLAN HA. The secondary address is not required, but you can specify any non-floating self IP address for an internal VLAN.
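As a sketch only, the addresses in the table above could also be assigned from the command line with tmsh. The device name and IP addresses below are placeholders, and the property syntax should be verified against the tmsh reference for your software version:

```
# ConfigSync: the non-floating self IP address on VLAN internal (placeholder address)
tmsh modify cm device bigip-ch1.example.com configsync-ip 10.10.10.1

# Network failover: a unicast non-floating self IP on an internal VLAN
# (preferably VLAN HA), plus a multicast address
tmsh modify cm device bigip-ch1.example.com \
    unicast-address { { ip 10.20.20.1 port 1026 } } \
    multicast-interface eth0 multicast-ip 224.0.0.245 multicast-port 62960

# Connection mirroring: primary (and optional secondary) non-floating self IPs
tmsh modify cm device bigip-ch1.example.com \
    mirror-ip 10.20.20.1 mirror-secondary-ip 10.10.10.1
```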

DSC configuration for vCMP systems

On a vCMP® system, the devices in a device group are virtual devices, known as vCMP guests. You configure device trust, config sync, failover, and mirroring to occur between equivalent vCMP guests in separate chassis.

For example, if you have a pair of VIPRION® systems running vCMP, and each system has three vCMP guests, you can create a separate device group for each pair of equivalent guests. This table shows an example.

Table 2. Sample device groups for two VIPRION systems with vCMP
Device groups for vCMP Device group members
Device-Group-A
  • Guest1 on chassis1
  • Guest1 on chassis2
Device-Group-B
  • Guest2 on chassis1
  • Guest2 on chassis2
Device-Group-C
  • Guest3 on chassis1
  • Guest3 on chassis2

By isolating guests into separate device groups, you ensure that each guest synchronizes and fails over to its equivalent guest. The next table describes the IP addresses that you must specify when configuring redundancy.
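The pairing rule in Table 2 can be expressed as a short sketch: one device group per guest, whose members are the equivalent guests on each chassis. The guest and chassis names are illustrative placeholders, not values from any real configuration.

```python
# Sketch: pair equivalent vCMP guests across two chassis into device groups,
# so each guest synchronizes and fails over only to its peer on the other chassis.

def pair_device_groups(guests, chassis=("chassis1", "chassis2")):
    """Return a mapping of device-group name -> members, one group per guest."""
    groups = {}
    for i, guest in enumerate(guests):
        name = f"Device-Group-{chr(ord('A') + i)}"
        groups[name] = [f"{guest} on {c}" for c in chassis]
    return groups

groups = pair_device_groups(["Guest1", "Guest2", "Guest3"])
# groups["Device-Group-A"] contains Guest1 on chassis1 and Guest1 on chassis2
```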

Table 3. Required IP addresses for DSC configuration on a VIPRION system with vCMP
Feature IP addresses required
Device trust The cluster management IP address of the guest.
ConfigSync The non-floating self IP address on the guest that is associated with VLAN internal on the host.
Failover
  • Recommended: The unicast non-floating self IP address on the guest that is associated with an internal VLAN on the host (preferably VLAN HA), as well as a multicast address.
  • Alternative: The unicast management IP addresses for all slots configured for the guest.
Connection mirroring For the primary address, the non-floating self IP address on the guest that is associated with VLAN internal on the host. The secondary address is not required, but you can specify any non-floating self IP address on the guest that is associated with an internal VLAN on the host.

About DSC configuration for systems with APM

When you configure a VIPRION® system (or a VIPRION system provisioned for vCMP®) to be a member of a Sync-Failover device group, you can specify the minimum number of cluster members (physical or virtual) that must be available to prevent failover. If the number of available cluster members falls below the specified value, the chassis or vCMP guest fails over to another device group member.

When one of the BIG-IP® modules provisioned on your VIPRION® system or guest is Access Policy Manager® (APM®), you have a special consideration. The BIG-IP system automatically mirrors all APM session data to the designated next-active device instead of to an active member of the same VIPRION or vCMP cluster. As a result, unexpected behavior might occur if one or more cluster members becomes unavailable.

To prevent unexpected behavior, always configure the chassis or guest so that the minimum number of cluster members required to prevent failover equals the total number of defined cluster members. For example, if the cluster is configured to contain a total of four cluster members, set the Minimum Up Members value to 4, signifying that failover should occur whenever fewer than all four cluster members are available. In this way, if even one cluster member becomes unavailable, the system or guest fails over to the next-active peer device, which holds the mirrored APM session data and has its full complement of cluster members available.
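The recommendation above reduces to a simple rule, sketched here in Python. The function and its defaults are illustrative only; the actual behavior is controlled by the Minimum Up Members setting on the system.

```python
# Sketch of the Minimum Up Members rule: fail over when the number of
# available cluster members drops below the configured threshold.

def should_fail_over(available_members, total_members, min_up_members=None):
    """With APM, the recommended setting is min_up_members == total_members,
    so the loss of even one blade triggers failover to the next-active peer."""
    if min_up_members is None:
        min_up_members = total_members  # recommended value when APM is provisioned
    return available_members < min_up_members

# Four-member cluster with Minimum Up Members set to 4:
should_fail_over(3, 4)  # True: one member down triggers failover
should_fail_over(4, 4)  # False: all members available
```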

About connection mirroring

Connection mirroring ensures that if a blade, or a cluster within a device service clustering (redundant system) configuration, becomes unavailable, the system can still process existing connections. You can choose between two types of mirroring to configure for a VIPRION® system:

Intra-cluster mirroring
The VIPRION system mirrors the connections and session persistence records within the cluster; that is, between the blades in the cluster. You can configure intra-cluster mirroring on both single devices and redundant configurations. Note that F5 Networks® does not support intra-cluster mirroring for Layer 7 (non-FastL4) virtual servers.
Inter-cluster mirroring
The VIPRION system mirrors the connections and session persistence records to another cluster in a redundant configuration. You can configure inter-cluster mirroring on a redundant system configuration only, and only on identical hardware platforms. Moreover, on a VIPRION® system running the vCMP® feature, the two guests acting as mirrored peers must each reside on a separate chassis, with the same number of slots, the same slot numbers, and the same number of cores allocated per slot.
Note: Inter-cluster connection mirroring for CMP-disabled virtual servers is not supported.

Intra-cluster mirroring and inter-cluster mirroring are mutually exclusive. Note that although connection mirroring enhances the reliability of your system, it might affect system performance.
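The constraints above can be summarized in a small validity check. This is a reasoning sketch only; the mode names ("within", "between", "none") mirror the Network Mirroring choices in the Configuration utility, and the function itself is hypothetical.

```python
# Sketch of the mirroring constraints: the two modes are mutually exclusive,
# intra-cluster mirroring supports only FastL4 virtual servers, and
# inter-cluster mirroring requires a redundant configuration.

def valid_mirroring(mode, fastl4=True, redundant=True):
    if mode == "within":    # intra-cluster mirroring
        return fastl4       # Layer 7 (non-FastL4) virtual servers not supported
    if mode == "between":   # inter-cluster mirroring
        return redundant    # redundant system configurations only
    return mode == "none"   # mirroring disabled is always valid
```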

Configuring connection mirroring within a cluster

Using the BIG-IP® Configuration utility, you can configure intra-cluster connection mirroring for a VIPRION® cluster. When you configure intra-cluster mirroring, the system mirrors connections among cluster members within a single chassis.
Important: Intra-cluster mirroring supports mirroring for FastL4 connections only.
  1. From a browser window, log in to the BIG-IP Configuration utility, using the cluster IP address.
  2. On the Main tab, click Device Management > Devices.
    The Devices screen opens.
  3. In the Device list, in the Name column, click the name of the device you want to configure.
  4. From the Device Connectivity menu, choose Mirroring.
  5. From the Network Mirroring list, select Within Cluster.
  6. Click Update.
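For scripted deployments, the same setting may map to a system database variable that you can set with tmsh. This is a hedged sketch; the variable name (statemirror.clustermirroring is assumed here) should be verified against the documentation for your software version:

```
tmsh modify sys db statemirror.clustermirroring value within
```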

Configuring connection mirroring between clusters

Using the BIG-IP® Configuration utility, you can configure inter-cluster connection mirroring for a VIPRION® cluster. When you configure inter-cluster mirroring, the system mirrors connections between two separate clusters, one per chassis.
Important: Connection mirroring functions only between devices with identical hardware platforms. Moreover, on a VIPRION® system running the vCMP® feature, the two guests acting as mirrored peers must each reside on a separate chassis, with the same number of slots, the same slot numbers, and the same number of cores allocated per slot.
  1. From a browser window, log in to the BIG-IP Configuration utility, using the cluster IP address.
  2. On the Main tab, click Device Management > Devices.
    The Devices screen opens.
  3. In the Device list, in the Name column, click the name of the device you want to configure.
  4. From the Device Connectivity menu, choose Mirroring.
  5. From the Network Mirroring list, select Between Clusters.
  6. Click Update.
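As with intra-cluster mirroring, this setting may also be controlled through a system database variable with tmsh. This is a hedged sketch; the variable name (statemirror.clustermirroring is assumed here) should be verified for your software version:

```
tmsh modify sys db statemirror.clustermirroring value between
```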
