The most common TMOS device service clustering (DSC) implementation is an active-standby configuration, where a single traffic group is active on one of the devices in the device group and is in a standby state on a peer device. If failover occurs, the standby traffic group on the peer device becomes active and begins processing the application traffic.
To implement this configuration, you create a Sync-Failover device group. A Sync-Failover device group with two or more members and one traffic group provides configuration synchronization and device failover, and, optionally, connection mirroring.
If the device with the active traffic group goes offline, the traffic group becomes active on a peer device, and application processing is handled by that device.
The way you configure device service clustering (DSC) (also known as redundancy) on a VIPRION system varies depending on whether the system is provisioned to run the vCMP feature.
For a device group that consists of VIPRION systems that are not licensed and provisioned for vCMP, each VIPRION cluster constitutes an individual device group member. The following table describes the IP addresses that you must specify when configuring redundancy.
| Feature | IP addresses required |
| --- | --- |
| Device trust | The primary floating management IP address for the VIPRION cluster. |
| ConfigSync | The unicast non-floating self IP address assigned to VLAN internal. |
| Connection mirroring | For the primary address, the non-floating self IP address that you assigned to VLAN HA. The secondary address is not required, but you can specify any non-floating self IP address for an internal VLAN. |
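As a sketch, the ConfigSync address in the table above can also be set with tmsh; the device name and self IP address below are placeholders, not values from this document:

```shell
# Hypothetical device name and address -- substitute your own.
# Point config sync at the unicast non-floating self IP on VLAN internal:
tmsh modify cm device bigip1.example.com configsync-ip 10.10.10.1

# Confirm the setting:
tmsh list cm device bigip1.example.com configsync-ip
```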
On a vCMP system, the devices in a device group are virtual devices, known as vCMP guests. You configure device trust, config sync, failover, and mirroring to occur between equivalent vCMP guests in separate chassis.
For example, if you have a pair of VIPRION systems running vCMP, and each system has three vCMP guests, you can create a separate device group for each pair of equivalent guests. Table 4.2 shows an example.
| Device groups for vCMP | Device group members |
| --- | --- |
| Device group 1 | Guest 1 on chassis 1 and guest 1 on chassis 2 |
| Device group 2 | Guest 2 on chassis 1 and guest 2 on chassis 2 |
| Device group 3 | Guest 3 on chassis 1 and guest 3 on chassis 2 |
By isolating guests into separate device groups, you ensure that each guest synchronizes and fails over to its equivalent guest. The following table describes the IP addresses that you must specify when configuring redundancy:
| Feature | IP addresses required |
| --- | --- |
| Device trust | The cluster management IP address of the guest. |
| ConfigSync | The non-floating self IP address on the guest that is associated with VLAN internal on the host. |
| Connection mirroring | For the primary address, the non-floating self IP address on the guest that is associated with VLAN internal on the host. The secondary address is not required, but you can specify any non-floating self IP address on the guest that is associated with an internal VLAN on the host. |
Before you set up device service clustering (DSC), you must configure these BIG-IP components on each device that you intend to include in the device group.
| BIG-IP component | Requirement |
| --- | --- |
| Hardware, licensing, and provisioning | Devices in a device group must match with respect to product licensing and module provisioning. Heterogeneous hardware platforms within a device group are supported. |
| BIG-IP software version | Each device must be running BIG-IP version 11.x. This ensures successful configuration synchronization. |
| Management IP addresses | Each device must have a management IP address, a network mask, and a management route defined. |
| FQDN | Each device must have a fully qualified domain name (FQDN) as its host name. |
| User name and password | Each device must have a user name and password defined on it that you will use when logging in to the BIG-IP Configuration utility. |
| Root folder properties | The platform properties for the root folder must be set correctly (Sync-Failover and traffic-group-1). |
| VLANs | You must create the required VLANs on each device (for example, VLAN internal and VLAN HA), if you have not already done so. |
| Self IP addresses | You must create the required self IP addresses on each device, if you have not already done so. Note: When you create floating self IP addresses, the BIG-IP system automatically adds them to the default floating traffic group, traffic-group-1. To add a self IP address to a different traffic group, you must modify the value of the self IP address Traffic Group property. Important: If the BIG-IP device you are configuring is accessed using Amazon Web Services, the IP address you specify must be the floating IP address for high availability fast failover that you configured for the EC2 instance. |
| Port lockdown | For self IP addresses that you create on each device, verify that the Port Lockdown setting is set to Allow All, Allow Default, or Allow Custom. Do not specify Allow None. |
| Application-related objects | You must create any virtual IP addresses and, optionally, SNAT translation addresses as part of the local traffic configuration. You must also configure any iApp application services if they are required for your application. When you create these addresses or services, the objects automatically become members of the default traffic group, traffic-group-1. |
| Time synchronization | The times set by the NTP service on all devices must be synchronized. This is a requirement for configuration synchronization to operate successfully. |
| Device certificates | Verify that each device includes an x509 device certificate. Devices with device certificates can authenticate and therefore trust one another, which is a prerequisite for device-to-device communication and data exchange. |
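Two of these prerequisites, time synchronization and self IP creation with an appropriate port lockdown setting, can be sketched from tmsh as follows; the NTP servers, VLAN names, and addresses are placeholders:

```shell
# Hypothetical values -- substitute your own NTP servers, VLANs, and addresses.
# Synchronize time via NTP (required for config sync to operate):
tmsh modify sys ntp servers add { 0.pool.ntp.org 1.pool.ntp.org }

# Create a non-floating self IP; allow-service default corresponds to the
# Allow Default port lockdown setting (do not use allow-service none):
tmsh create net self internal-self address 10.10.10.1/24 vlan internal allow-service default

# A floating self IP joins traffic-group-1 automatically; here the traffic
# group is made explicit:
tmsh create net self internal-float address 10.10.10.10/24 vlan internal traffic-group traffic-group-1 allow-service default
```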
Use the tasks in this implementation to create a two-member device group, with one active traffic group, that syncs the BIG-IP configuration to the peer device and provides failover capability if the peer device goes offline. Note that on a vCMP system, the devices in a specific device group are vCMP guests, one per chassis.
You can specify the local self IP address that you want other devices in a device group to use when mirroring their connections to this device. Connection mirroring ensures that in-process connections for an active traffic group are not dropped when failover occurs. You typically perform this task when you initially set up device service clustering (DSC).
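A minimal tmsh sketch of this task, assuming a hypothetical device name and self IP addresses:

```shell
# Primary mirroring address: the non-floating self IP on VLAN HA.
# The secondary address is optional. All values here are placeholders.
tmsh modify cm device bigip1.example.com mirror-ip 10.20.20.1 mirror-secondary-ip 10.10.10.1
```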
You perform this task when you have more than one type of hardware platform in a device group and you want to configure load-aware failover. Load-aware failover ensures that the BIG-IP system can intelligently select the next-active device for each active traffic group in the device group when failover occurs. As part of configuring load-aware failover, you define an HA capacity to establish the amount of computing resource that the device provides relative to other devices in the device group.
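As a hedged sketch, HA capacity is set per device with tmsh; the device names and values below are illustrative only. Capacity values are relative to the other members of the device group, not absolute units:

```shell
# A larger ha-capacity marks a more powerful device relative to its peers.
tmsh modify cm device big-chassis.example.com ha-capacity 10
tmsh modify cm device small-appliance.example.com ha-capacity 4

# Optionally weight an individual traffic group's load:
tmsh modify cm traffic-group traffic-group-1 ha-load-factor 2
```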
Before you begin this task, verify that:
You perform this task to establish trust among devices on one or more network segments. Devices that trust each other constitute the local trust domain. A device must be a member of the local trust domain prior to joining a device group.
By default, the BIG-IP software includes a local trust domain with one member, which is the local device. You can choose any one of the BIG-IP devices slated for a device group and log into that device to add other devices to the local trust domain. For example, devices Bigip_1, Bigip_2, and Bigip_3 each initially shows only itself as a member of the local trust domain. To configure the local trust domain to include all three devices, you can simply log into device Bigip_1 and add devices Bigip_2 and Bigip_3 to the local trust domain; there is no need to repeat this process on devices Bigip_2 and Bigip_3.
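The example above can be sketched in tmsh, run on Bigip_1 only; the peer IP addresses and credentials are placeholders:

```shell
# Add Bigip_2 and Bigip_3 to the local trust domain from Bigip_1.
tmsh modify cm trust-domain Root ca-devices add { 10.10.10.2 } name Bigip_2 username admin password adminpass
tmsh modify cm trust-domain Root ca-devices add { 10.10.10.3 } name Bigip_3 username admin password adminpass

# Verify that all three devices now appear in the trust domain:
tmsh list cm trust-domain
```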
This task establishes failover capability between two or more BIG-IP devices. If an active device in a Sync-Failover device group becomes unavailable, the configuration objects fail over to another member of the device group and traffic processing is unaffected. You perform this task on any one of the authority devices within the local trust domain.
Repeat this task for each Sync-Failover device group that you want to create for your network configuration.
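A tmsh sketch of creating the device group, assuming the hypothetical device and group names shown; run it on any one trusted device:

```shell
# Create a two-member Sync-Failover device group with network failover.
tmsh create cm device-group dg_failover type sync-failover devices add { Bigip_1 Bigip_2 } network-failover enabled

# Push the local configuration to the new group:
tmsh run cm config-sync to-group dg_failover
```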
You typically perform this task during initial Device Service Clustering (DSC) configuration, to specify the local IP addresses that you want other devices in the device group to use for continuous health-assessment communication with the local device or guest. You must perform this task locally on each device in the device group.
| Platform | Failover addresses to specify |
| --- | --- |
| Appliance without vCMP | Type a static self IP address associated with an internal VLAN (preferably VLAN HA) and the static management IP address currently assigned to the device. |
| Appliance with vCMP | Type a static self IP address associated with an internal VLAN (preferably VLAN HA) and the unique management IP address currently assigned to the guest. |
| VIPRION without vCMP | Type a static self IP address associated with an internal VLAN (preferably VLAN HA). If you choose to specify unicast addresses only (and not a multicast address), you must also type the existing, static management IP addresses that you previously configured for all slots in the cluster. If you choose to specify one or more unicast addresses and a multicast address, then you do not need to specify the existing, per-slot static management IP addresses when configuring addresses for failover communication. |
| VIPRION with vCMP | Type a self IP address that is defined on the guest and associated with an internal VLAN on the host (preferably VLAN HA). If you choose to specify unicast failover addresses only (and not a multicast address), you must also type the existing, virtual static management IP addresses that you previously configured for all slots in the guest's virtual cluster. If you choose to specify one or more unicast addresses and a multicast address, you do not need to specify the existing, per-slot virtual static management IP addresses when configuring addresses for failover communication. |
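For the simplest (appliance without vCMP) case, the failover addresses can be sketched in tmsh as shown below; the device name and self IP are placeholders, and the management-ip keyword refers to the device's own static management address:

```shell
# One unicast address on an internal (preferably HA) VLAN, plus the
# device's static management IP; 1026 is the failover port.
tmsh modify cm device bigip1.example.com unicast-address { { ip 10.20.20.1 port 1026 } { ip management-ip port 1026 } }
```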
You now have a Sync-Failover device group set up with an active-standby DSC configuration. This configuration uses the default floating traffic group (named traffic-group-1), which contains the application-specific floating self IP and virtual IP addresses, and is initially configured to be active on one of the two devices. If the device with the active traffic group goes offline, the traffic group becomes active on the other device in the group, and application processing continues.
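To confirm the finished configuration behaves as described, the following tmsh commands can be used on either device; forcing a failover is an assumption-free way to watch the traffic group move:

```shell
# Verify sync and failover state:
tmsh show cm sync-status
tmsh show cm failover-status

# Exercise failover by forcing the currently active device to standby;
# the traffic group should become active on the peer.
tmsh run sys failover standby
```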