A common TMOS device service clustering (DSC) implementation is an active-standby configuration, where a single traffic group is active on one of the devices in the device group, and is in a standby state on a peer device. Alternatively, you can create a second traffic group and activate that traffic group on the peer device. In this active-active configuration, the devices each process traffic for a different application simultaneously. If one of the devices in the device group goes offline, the traffic group that was active on that device fails over to the peer device. The result is that two traffic groups become active on one device.
To implement this configuration, you create a Sync-Failover device group. A Sync-Failover device group provides configuration synchronization and device failover, and optionally, connection mirroring.
The way you configure device service clustering (DSC) on a VIPRION system varies depending on whether the system is provisioned to run the vCMP feature.
On a VIPRION system that is not provisioned for vCMP, the management IP address that you specify for establishing device trust and enabling failover should be the system's primary cluster IP address. This is a floating management IP address.
On a vCMP system, the devices in a device group are virtual devices, known as vCMP guests. You configure config sync and failover to occur between equivalent vCMP guests in separate chassis.
For example, if you have a pair of VIPRION systems running vCMP, and each system has three vCMP guests, you can create a separate device group for each pair of equivalent guests. Table 4.2 shows an example.
| Device groups for vCMP | Device group members |
| --- | --- |
By isolating guests into separate device groups, you ensure that each guest synchronizes and fails over to its equivalent guest.
The self IP addresses that you specify per guest for config sync and failover should be the self IP addresses that you previously configured on the guest (not the host). Similarly, the management IP address that you specify per guest for device trust and failover should be the cluster IP address of the guest.
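As a sketch, these per-guest addresses can also be set from tmsh. The commands below run on each vCMP guest (not on the host); the guest name and all IP addresses are hypothetical placeholders, not values from this configuration:

```sh
# Hypothetical guest name and addresses throughout.

# Point config sync at a self IP address configured on the guest itself.
tmsh modify cm device guest1-ch1.example.com configsync-ip 10.10.10.1

# Use the guest's self IP address and cluster management address
# for network failover traffic.
tmsh modify cm device guest1-ch1.example.com \
    unicast-address { { ip 10.10.10.1 } { ip 192.0.2.11 } }
```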
Before you set up device service clustering (DSC), you must configure these BIG-IP components on each device that you intend to include in the device group.
| Component | Requirement |
| --- | --- |
| Hardware, licensing, and provisioning | Devices in a device group must match as closely as possible with respect to hardware platform, product licensing, and module provisioning. If you want to configure mirroring, ensure that the hardware platforms of the mirrored devices match. |
| BIG-IP software version | Each device must be running BIG-IP version 11.x. This ensures successful configuration synchronization. |
| Management IP addresses | Each device must have a management IP address, a network mask, and a management route defined. |
| FQDN | Each device must have a fully qualified domain name (FQDN) as its host name. |
| User name and password | Each device must have a user name and password defined on it that you will use when logging in to the BIG-IP Configuration utility. |
| Root folder properties | The platform properties for the root folder must be set correctly (Sync-Failover and traffic-group-1). |
| VLANs | You must create the required VLANs on each device, if you have not already done so. |
| Self IP addresses | You must create the required self IP addresses on each device, if you have not already done so. Note: When you create floating self IP addresses, the BIG-IP system automatically adds them to the default floating traffic group, traffic-group-1. To add a self IP address to a different traffic group, you must modify the value of the self IP address Traffic Group property. |
| Port lockdown | For the self IP addresses that you create on each device, verify that the Port Lockdown setting is set to Allow All, Allow Default, or Allow Custom. Do not specify None. |
| Application-related objects | You must create any virtual IP addresses and, optionally, SNAT translation addresses as part of the local traffic configuration. You must also configure any iApp application services if they are required for your application. When you create these addresses or services, the objects automatically become members of the default traffic group, traffic-group-1. |
| Time synchronization | The times set by the NTP service on all devices must be synchronized. This is a requirement for configuration synchronization to operate successfully. |
| Device certificates | Verify that each device includes an X.509 device certificate. Devices with device certificates can authenticate and therefore trust one another, which is a prerequisite for device-to-device communication and data exchange. |
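Several of these prerequisites can be staged from the tmsh command line. This is only a sketch; the VLAN, self IP, and traffic group names, interface number, addresses, and NTP server are assumed placeholders:

```sh
# Hypothetical example values throughout.

# A VLAN and a self IP address with a safe Port Lockdown setting.
tmsh create net vlan ha-vlan interfaces add { 1.3 }
tmsh create net self ha-self address 10.10.10.1/24 vlan ha-vlan \
    allow-service default

# Move a floating self IP address out of traffic-group-1 and into a
# second traffic group (for the active-active configuration).
tmsh create cm traffic-group traffic-group-2
tmsh modify net self app2-floating traffic-group traffic-group-2

# Keep device clocks synchronized; config sync depends on this.
tmsh modify sys ntp servers add { pool.ntp.org }
```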
This illustration shows two separate Sync-Failover device groups. In the first device group, only LTM1 processes application traffic, and the two BIG-IP devices are configured to provide active-standby high availability. This means that LTM1 and LTM2 synchronize their configurations, and the failover objects on LTM1 float to LTM2 if LTM1 becomes unavailable.
In the second device group, both LTM1 and LTM2 process application traffic, and the BIG-IP devices are configured to provide active-active high availability. This means that LTM1 and LTM2 synchronize their configurations, the failover objects on LTM1 float to LTM2 if LTM1 becomes unavailable, and the failover objects on LTM2 float to LTM1 if LTM2 becomes unavailable.
Use the tasks in this implementation to create a two-member device group, with two active traffic groups, that syncs the BIG-IP configuration to the peer device and provides failover capability if the peer device goes offline. Note that on a vCMP system, the devices in a specific device group are vCMP guests, one per chassis.
Before you begin this task, verify that:
You perform this task to establish trust among devices on one or more network segments. Devices that trust each other constitute the local trust domain. A device must be a member of the local trust domain prior to joining a device group.
By default, the BIG-IP software includes a local trust domain with one member, which is the local device. You can choose any one of the BIG-IP devices slated for a device group and log in to that device to add other devices to the local trust domain. For example, devices A, B, and C each initially show only the local device as a member of the local trust domain. To configure the local trust domain to include all three devices, you can simply log in to device A and add devices B and C to the local trust domain. There is no need to repeat this process on devices B and C.
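A minimal tmsh sketch of this step, run once on the device you choose as the starting point (device A); the peer address, device name, and credentials below are placeholders:

```sh
# Add device B to the local trust domain from device A.
tmsh modify cm trust-domain Root ca-devices add { 192.0.2.12 } \
    name bigip-b.example.com username admin password <admin-password>
```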
This task establishes failover capability between two or more BIG-IP devices. If the active device in a Sync-Failover device group becomes unavailable, the configuration objects fail over to another member of the device group and traffic processing is unaffected. You perform this task on any one of the authority devices within the local trust domain.
Repeat this task for each Sync-Failover device group that you want to create for your network configuration.
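Assuming two devices that already trust each other, the device group itself can be created and synchronized from tmsh roughly as follows; the device and group names are illustrative:

```sh
# Create a two-member Sync-Failover device group with network failover.
tmsh create cm device-group dg-failover type sync-failover \
    devices add { bigip-a.example.com bigip-b.example.com } \
    network-failover enabled

# Push the local configuration to the other group member.
tmsh run cm config-sync to-group dg-failover
```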
When you configure failover settings, you can specify whether you want the BIG-IP system to use a serial cable or the network for failover.
You can also specify, on failover, the amount of time allowed for switches from other vendors to learn the MAC address of the newly active device.
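One way to adjust that interval from tmsh, assuming the standard failover.standbylinkdowntime database variable; the 10-second value is only an example:

```sh
# Seconds that links stay down on failover, giving adjacent switches
# time to relearn the MAC address of the newly active device.
tmsh modify sys db failover.standbylinkdowntime value 10
```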
This task causes the selected traffic group on the local device to switch to a standby state. By forcing the traffic group into a standby state, the traffic group becomes active on another device in the device group. For device groups with more than two members, you can choose the specific device to which the traffic group fails over. This task is optional.
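For example, the force-to-standby step can be sketched in tmsh as follows; the traffic group and device names are placeholders:

```sh
# Force traffic-group-2 to standby on the local device; it becomes
# active on the next-active device in the device group.
tmsh run sys failover standby traffic-group traffic-group-2

# With more than two members, you can target a specific device.
tmsh run sys failover standby traffic-group traffic-group-2 \
    device bigip-c.example.com
```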
You now have a Sync-Failover device group set up with an active-active DSC configuration. In this configuration, each device has a different active traffic group running on it. That is, the active traffic group on one device is the default traffic group (named traffic-group-1), while the active traffic group on the peer device is a traffic group that you create. Each traffic group contains the floating self IP and virtual IP addresses specific to the relevant application.
If one device goes offline, the traffic group that was active on that device becomes active on the other device in the group, and processing for both applications continues on one device.
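To confirm the resulting state, you can check failover status from tmsh on either device; a brief sketch:

```sh
# Show the local device's failover state and next-active information.
tmsh show cm failover-status

# Show the status of each traffic group across the device group.
tmsh show cm traffic-group
```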