A common TMOS device service clustering (DSC) implementation is an active-standby configuration, where a single traffic group is active on one of the devices in the device group and is in a standby state on a peer device. Alternatively, you can create a second traffic group and activate that traffic group on a peer device. In this active-active configuration, each device processes traffic for a different application simultaneously. If one of the devices in the device group goes offline, the traffic group that was active on that device fails over to a peer device. The result is that two traffic groups can become active on one device.
To implement this configuration, you create a Sync-Failover device group. A Sync-Failover device group with two or more members provides configuration synchronization and device failover, and optionally, connection mirroring.
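As a rough illustration of this setup, the following Python sketch uses the iControl REST API to create a second traffic group and move an application's virtual address into it. The management address, credentials, and object names are placeholders, and the REST property names are assumed to mirror their tmsh equivalents rather than taken from this documentation.

```python
# Hypothetical sketch: create a second floating traffic group and reassign one
# application's virtual address to it via iControl REST. All names, addresses,
# and credentials below are placeholders.
import requests
import urllib3

urllib3.disable_warnings()
BIGIP_A = "https://192.0.2.11"              # management address of one device (placeholder)

with requests.Session() as s:
    s.auth = ("admin", "admin-password")    # placeholder credentials
    s.verify = False                        # BIG-IP ships with a self-signed certificate

    # Create the second floating traffic group.
    s.post(f"{BIGIP_A}/mgmt/tm/cm/traffic-group",
           json={"name": "traffic-group-2", "partition": "Common"}).raise_for_status()

    # Move the second application's virtual address (placeholder 203.0.113.20)
    # from traffic-group-1 to the new traffic group; floating self IP addresses
    # can be reassigned the same way.
    s.patch(f"{BIGIP_A}/mgmt/tm/ltm/virtual-address/~Common~203.0.113.20",
            json={"trafficGroup": "/Common/traffic-group-2"}).raise_for_status()
```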
The way you configure device service clustering (DSC) (also known as redundancy) on a VIPRION system varies depending on whether the system is provisioned to run the vCMP feature.
For a device group that consists of VIPRION systems that are not licensed and provisioned for vCMP, each VIPRION cluster constitutes an individual device group member. The following table describes the IP addresses that you must specify when configuring redundancy.
| Feature | IP addresses required |
| --- | --- |
| Device trust | The primary floating management IP address for the VIPRION cluster. |
| ConfigSync | The unicast non-floating self IP address assigned to VLAN internal. |
| Connection mirroring | For the primary address, the non-floating self IP address that you assigned to VLAN HA. The secondary address is not required, but you can specify any non-floating self IP address for an internal VLAN. |
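As a hedged example, the device-level ConfigSync and mirroring addresses in the table above could be assigned with an iControl REST call along these lines. The device name, addresses, and credentials are placeholders, and the camelCase property names are assumed from the tmsh options configsync-ip, mirror-ip, and mirror-secondary-ip.

```python
# Hypothetical sketch: assign ConfigSync and connection-mirroring addresses to
# the local device object. All names and addresses are placeholders.
import requests
import urllib3

urllib3.disable_warnings()
BIGIP = "https://192.0.2.11"                      # cluster management IP (placeholder)
DEVICE = "~Common~bigip-a.example.com"            # device object name (placeholder)

with requests.Session() as s:
    s.auth = ("admin", "admin-password")          # placeholder credentials
    s.verify = False                              # self-signed device certificate
    r = s.patch(
        f"{BIGIP}/mgmt/tm/cm/device/{DEVICE}",
        json={
            "configsyncIp": "10.0.2.11",          # non-floating self IP on VLAN internal
            "mirrorIp": "10.0.3.11",              # non-floating self IP on VLAN HA (primary)
            "mirrorSecondaryIp": "10.0.2.11",     # optional secondary, any internal self IP
        },
    )
    r.raise_for_status()
```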
On a vCMP system, the devices in a device group are virtual devices, known as vCMP guests. You configure device trust, config sync, failover, and mirroring to occur between equivalent vCMP guests in separate chassis.
For example, if you have a pair of VIPRION systems running vCMP, and each system has three vCMP guests, you can create a separate device group for each pair of equivalent guests. The following table shows an example.

| Device groups for vCMP | Device group members |
| --- | --- |
| Device group 1 | Guest 1 on chassis 1 and guest 1 on chassis 2 |
| Device group 2 | Guest 2 on chassis 1 and guest 2 on chassis 2 |
| Device group 3 | Guest 3 on chassis 1 and guest 3 on chassis 2 |
By isolating guests into separate device groups, you ensure that each guest synchronizes and fails over to its equivalent guest. The following table describes the IP addresses that you must specify when configuring redundancy:
| Feature | IP addresses required |
| --- | --- |
| Device trust | The cluster management IP address of the guest. |
| ConfigSync | The non-floating self IP address on the guest that is associated with VLAN internal on the host. |
| Connection mirroring | For the primary address, the non-floating self IP address on the guest that is associated with VLAN internal on the host. The secondary address is not required, but you can specify any non-floating self IP address on the guest that is associated with an internal VLAN on the host. |
Before you set up device service clustering (DSC), you must configure these BIG-IP components on each device that you intend to include in the device group.
| BIG-IP component | Considerations |
| --- | --- |
| Hardware, licensing, and provisioning | Devices in a device group must match with respect to product licensing and module provisioning. Heterogeneous hardware platforms within a device group are supported. |
| BIG-IP software version | Each device must be running BIG-IP version 11.x. This ensures successful configuration synchronization. |
| Management IP addresses | Each device must have a management IP address, a network mask, and a management route defined. |
| FQDN | Each device must have a fully-qualified domain name (FQDN) as its host name. |
| User name and password | Each device must have a user name and password defined on it that you will use when logging in to the BIG-IP Configuration utility. |
| Root folder properties | The platform properties for the root folder must be set correctly (Sync-Failover and traffic-group-1). |
| VLANs | You must create the required VLANs on each device, if you have not already done so, typically an internal VLAN, an external VLAN, and a VLAN dedicated to failover communication (such as VLAN HA). |
| Self IP addresses | You must create the required self IP addresses on each device, if you have not already done so. Note: When you create floating self IP addresses, the BIG-IP system automatically adds them to the default floating traffic group, traffic-group-1. To add a self IP address to a different traffic group, you must modify the value of the self IP address Traffic Group property. Important: If the BIG-IP device you are configuring is accessed using Amazon Web Services, the IP address you specify must be the floating IP address for high availability fast failover that you configured for the EC2 instance. |
| Port lockdown | For self IP addresses that you create on each device, verify that the Port Lockdown setting is set to Allow All, Allow Default, or Allow Custom. Do not specify Allow None. |
| Application-related objects | You must create any virtual IP addresses and, optionally, SNAT translation addresses as part of the local traffic configuration. You must also configure any iApp application services if they are required for your application. When you create these addresses or services, the objects automatically become members of the default traffic group, traffic-group-1. |
| Time synchronization | The times set by the NTP service on all devices must be synchronized. This is a requirement for configuration synchronization to operate successfully. |
| Device certificates | Verify that each device includes an x509 device certificate. Devices with device certificates can authenticate and therefore trust one another, which is a prerequisite for device-to-device communication and data exchange. |
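The following Python sketch shows one way a few of these prerequisites (an FQDN host name, NTP servers, and self IP port lockdown) might be spot-checked over iControl REST before you begin. The endpoints follow the usual /mgmt/tm layout, but the address, credentials, and the exact representation of the allowService property are assumptions to verify against your BIG-IP version.

```python
# Hypothetical pre-flight check for a few DSC prerequisites: FQDN host name,
# NTP servers, and self IP port lockdown. Address and credentials are placeholders.
import requests
import urllib3

urllib3.disable_warnings()
BIGIP = "https://192.0.2.11"

with requests.Session() as s:
    s.auth = ("admin", "admin-password")
    s.verify = False

    # Host name must be a fully-qualified domain name.
    hostname = s.get(f"{BIGIP}/mgmt/tm/sys/global-settings").json().get("hostname", "")
    if "." not in hostname:
        print(f"WARNING: host name '{hostname}' is not a fully-qualified domain name")

    # Device clocks must be kept in sync by NTP for config sync to work.
    ntp = s.get(f"{BIGIP}/mgmt/tm/sys/ntp").json()
    if not ntp.get("servers"):
        print("WARNING: no NTP servers configured; device clocks must be synchronized")

    # Port Lockdown on self IPs must not be Allow None (shown here as missing or "none").
    for self_ip in s.get(f"{BIGIP}/mgmt/tm/net/self").json().get("items", []):
        if self_ip.get("allowService") in (None, "none"):
            print(f"WARNING: {self_ip['name']} has Port Lockdown set to Allow None")
```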
This illustration shows two separate Sync-Failover device groups. In the first device group, only LTM1 processes application traffic, and the two BIG-IP devices are configured to provide active-standby high availability. This means that LTM1 and LTM2 synchronize their configurations, and the failover objects on LTM1 float to LTM2 if LTM1 becomes unavailable.
In the second device group, both LTM1 and LTM2 process application traffic, and the BIG-IP devices are configured to provide active-active high availability. This means that LTM1 and LTM2 synchronize their configurations, the failover objects on LTM1 float to LTM2 if LTM1 becomes unavailable, and the failover objects on LTM2 float to LTM1 if LTM2 becomes unavailable.
Use the tasks in this implementation to create a two-member device group, with two active traffic groups, that syncs the BIG-IP configuration to the peer device and provides failover capability if the peer device goes offline. Note that on a vCMP system, the devices in a specific device group are vCMP guests, one per chassis.
You can specify the local self IP address that you want other devices in a device group to use when mirroring their connections to this device. Connection mirroring ensures that in-process connections for an active traffic group are not dropped when failover occurs. You typically perform this task when you initially set up device service clustering (DSC).
You perform this task when you have more than one type of hardware platform in a device group and you want to configure load-aware failover. Load-aware failover ensures that the BIG-IP system can intelligently select the next-active device for each active traffic group in the device group when failover occurs. As part of configuring load-aware failover, you define an HA capacity to establish the amount of computing resource that the device provides relative to other devices in the device group.
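A sketch of how these settings might be applied over iControl REST follows. The haCapacity and haLoadFactor property names are assumed from the tmsh options ha-capacity and ha-load-factor, and the device name, traffic group name, and numeric values are placeholders.

```python
# Hypothetical sketch: weight a smaller platform for load-aware failover by
# giving it a lower HA capacity, and give a heavier traffic group a larger HA
# load factor. Names, numbers, and credentials are placeholders.
import requests
import urllib3

urllib3.disable_warnings()
BIGIP = "https://192.0.2.11"

with requests.Session() as s:
    s.auth = ("admin", "admin-password")
    s.verify = False

    # Relative computing capacity that this device provides within the device group.
    s.patch(f"{BIGIP}/mgmt/tm/cm/device/~Common~bigip-a.example.com",
            json={"haCapacity": 50}).raise_for_status()

    # Relative load that this traffic group places on whichever device runs it.
    s.patch(f"{BIGIP}/mgmt/tm/cm/traffic-group/~Common~traffic-group-2",
            json={"haLoadFactor": 2}).raise_for_status()
```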
Before you begin this task, verify that each device you intend to add to the local trust domain has a device certificate installed and the other BIG-IP components described earlier in this section configured.
You perform this task to establish trust among devices on one or more network segments. Devices that trust each other constitute the local trust domain. A device must be a member of the local trust domain prior to joining a device group.
By default, the BIG-IP software includes a local trust domain with one member, which is the local device. You can choose any one of the BIG-IP devices slated for a device group and log in to that device to add other devices to the local trust domain. For example, devices A, B, and C each initially show only themselves as members of the local trust domain. To configure the local trust domain to include all three devices, you can simply log in to device A and add devices B and C to the local trust domain. Note that there is no need to repeat this process on devices B and C.
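The sketch below illustrates this pattern from device A, adding devices B and C to the local trust domain in one pass. The /mgmt/tm/cm/add-to-trust endpoint and its field names are assumptions modeled on the equivalent tmsh trust-domain operation, and all addresses, device names, and credentials are placeholders.

```python
# Hypothetical sketch of adding peer devices to the local trust domain from
# device A. The add-to-trust endpoint and field names are assumptions; verify
# them against your BIG-IP version before use.
import requests
import urllib3

urllib3.disable_warnings()
BIGIP_A = "https://192.0.2.11"                    # device A, where trust is configured

peers = [
    {"device": "192.0.2.12", "deviceName": "bigip-b.example.com"},
    {"device": "192.0.2.13", "deviceName": "bigip-c.example.com"},
]

with requests.Session() as s:
    s.auth = ("admin", "admin-password")
    s.verify = False
    for peer in peers:
        body = {
            "command": "run",
            "name": "Root",
            "caDevice": True,                     # peers become certificate authority devices
            "username": "admin",                  # credentials on the peer being added
            "password": "peer-admin-password",
            **peer,
        }
        s.post(f"{BIGIP_A}/mgmt/tm/cm/add-to-trust", json=body).raise_for_status()
```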
This task establishes failover capability between two or more BIG-IP devices that you intend to run in an active-active configuration. If an active device in a Sync-Failover device group becomes unavailable, the configuration objects fail over to another member of the device group and traffic processing is unaffected. You perform this task on any one of the authority devices within the local trust domain.
Repeat this task for each Sync-Failover device group that you want to create for your network configuration.
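As a hedged example, creating a two-member Sync-Failover device group and pushing an initial config sync to it might look like the following. The group name, device names, and credentials are placeholders, and the shape of the devices payload is an assumption.

```python
# Hypothetical sketch: create a two-member Sync-Failover device group and run
# an initial config sync to it. Names and credentials are placeholders.
import requests
import urllib3

urllib3.disable_warnings()
BIGIP_A = "https://192.0.2.11"

with requests.Session() as s:
    s.auth = ("admin", "admin-password")
    s.verify = False

    # The device group spans both devices and provides sync plus failover.
    s.post(f"{BIGIP_A}/mgmt/tm/cm/device-group", json={
        "name": "dg-active-active",
        "type": "sync-failover",
        "autoSync": "disabled",
        "devices": [{"name": "bigip-a.example.com"}, {"name": "bigip-b.example.com"}],
    }).raise_for_status()

    # Push the local configuration to the rest of the group once.
    s.post(f"{BIGIP_A}/mgmt/tm/cm", json={
        "command": "run",
        "utilCmdArgs": "config-sync to-group dg-active-active",
    }).raise_for_status()
```

With autoSync disabled in this sketch, you decide when to push configuration changes to the group; enabling it instead would synchronize changes automatically.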
| Platform | Failover unicast addresses to specify |
| --- | --- |
| Non-VIPRION | Type a self IP address associated with an internal VLAN (preferably VLAN HA) and the management IP address for the device. |
| VIPRION without vCMP | Type the self IP address for an internal VLAN (preferably VLAN HA) and the management IP addresses for all slots in the VIPRION cluster. Note that if you also configure a multicast address (using the Use Failover Multicast Address setting), then these management IP addresses are not required. |
| VIPRION with vCMP | Type a self IP address that is defined on the guest and associated with an internal VLAN on the host (preferably VLAN HA). You must also specify the management IP addresses for all of the slots configured for the guest. Note that if you also configure a multicast address (using the Use Failover Multicast Address setting), then these management IP addresses are not required. |
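Following the non-VIPRION row of the table above, a sketch of defining the failover unicast addresses on one device might look like this. The addresses and device name are placeholders, the unicastAddress payload shape is an assumption, and 1026 is the default network failover port.

```python
# Hypothetical sketch: define failover unicast addresses for a non-VIPRION
# device (a self IP on VLAN HA plus the management address). Placeholders throughout.
import requests
import urllib3

urllib3.disable_warnings()
BIGIP = "https://192.0.2.11"

with requests.Session() as s:
    s.auth = ("admin", "admin-password")
    s.verify = False
    s.patch(f"{BIGIP}/mgmt/tm/cm/device/~Common~bigip-a.example.com", json={
        "unicastAddress": [
            {"ip": "10.0.3.11", "port": 1026},    # non-floating self IP on VLAN HA
            {"ip": "192.0.2.11", "port": 1026},   # management IP address
        ],
    }).raise_for_status()
```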
You perform this task when you want the selected traffic group on the local device to fail over to another device (that is, switch to a Standby state). Users typically perform this task when no automated method is configured for a traffic group, such as auto-failback or an HA group. By forcing the traffic group into a Standby state, the traffic group becomes active on another device in the device group. For device groups with more than two members, you can choose the specific device to which the traffic group fails over.
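A minimal sketch of forcing the default traffic group to standby over iControl REST follows. The payload mirrors the tmsh command run /sys failover standby traffic-group traffic-group-1 and is an assumption, as are the address and credentials.

```python
# Hypothetical sketch: force the default traffic group on the local device to
# standby so that it becomes active on a peer. Address and credentials are placeholders.
import requests
import urllib3

urllib3.disable_warnings()
BIGIP_A = "https://192.0.2.11"

with requests.Session() as s:
    s.auth = ("admin", "admin-password")
    s.verify = False
    s.post(f"{BIGIP_A}/mgmt/tm/sys/failover", json={
        "command": "run",
        "standby": True,
        "trafficGroup": "traffic-group-1",
    }).raise_for_status()
```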
You now have a Sync-Failover device group set up with an active-active DSC configuration. In this configuration, each device has a different active traffic group running on it. That is, the active traffic group on one device is the default traffic group (named traffic-group-1), while the active traffic group on the peer device is a traffic group that you create. Each traffic group contains the floating self IP and virtual IP addresses specific to the relevant application.
If one device goes offline, the traffic group that was active on that device becomes active on the other device in the group, and processing for both applications continues on one device.