Manual Chapter: Prepare for Data Collection Device Upgrade with Minimal Downtime

Prepare the data collection device cluster for upgrade with minimal downtime

With the minimal downtime method, you can upgrade your data collection device cluster while minimizing the time that the cluster is offline.

Important: You cannot perform the minimal downtime upgrade unless your cluster contains at least three data collection devices (DCDs).

Check the statistics retention policy

Before you begin an upgrade with minimal downtime, you must confirm that your statistics data is replicated to multiple DCDs. Perform this task on the primary BIG-IQ® system.
Note: Perform this task only if you are upgrading from version 5.2 to version 5.3.
  1. At the top of the screen, click System.
  2. On the left, expand BIG-IQ DATA COLLECTION and then select BIG-IQ Data Collection Devices.
    The BIG-IQ Data Collection Devices screen opens to list the data collection devices in the cluster.
  3. Click the Settings button.
    The Settings screen opens to display the current state of the DCD cluster defined for this BIG-IQ device.
  4. On the left, click Statistics Collection.
    The Statistics Collection Status screen displays the percentage of available disk space currently consumed by statistics data for each container.
  5. To change the retention settings for your statistics data, click Configure.
    The Statistics Retention Policy screen opens.
  6. Expand Advanced Settings, and then select the Enable Replicas check box.
    Replicas are copies of a data set that are available to the DCD cluster when one or more devices within that cluster become unavailable. You must allow the system to create these copies to upgrade your DCD cluster with minimal downtime and minimal data loss.
  7. Click Save & Close.
  8. Return to the BIG-IQ Data Collection Devices screen as in step 2. If you just changed the replication policy, the cluster health status icon should be yellow.
    Until the data replicas are distributed to the cluster, the status does not become green.
  9. Wait for the cluster health status icon to turn green, which indicates that the data is safely distributed to multiple DCDs.
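If you prefer to watch from the command line, the cluster health is also visible from the DCD's on-box Elasticsearch endpoint (the same localhost:9200 endpoint used by the disk space check later in this chapter). This is a sketch: the heredoc holds a sample response so the parsing is visible, and on a live DCD you would use the curl call shown in the comment instead.

```shell
#!/bin/sh
# Sketch: read the cluster health status from the command line instead of
# watching the icon. The heredoc below is a sample response, not live data.
health_json=$(cat <<'EOF'
{ "cluster_name": "dcd-cluster", "status": "green", "number_of_nodes": 4 }
EOF
)
# Live equivalent (same localhost:9200 endpoint as the disk space check):
#   health_json=$(curl -s localhost:9200/_cluster/health)

# Extract the status field; replicas are fully distributed when it is "green".
status=$(printf '%s' "$health_json" | sed -n 's/.*"status"[": ]*\([a-z]*\)".*/\1/p')
echo "cluster status: $status"
```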

Check data collection device health for version 5.2

You can use the BIG-IQ Data Collection Devices screen to review the overall health and status of the data collection devices you've configured. You can use the data displayed on this screen both before and after an upgrade to verify that your data collection device cluster configuration is as you expect it to be.
Note: Perform this task only if you are upgrading from version 5.2 to version 5.3.
  1. At the top of the screen, click System.
  2. On the left, expand BIG-IQ DATA COLLECTION and then select BIG-IQ Data Collection Devices.
    The BIG-IQ Data Collection Devices screen opens to list the data collection devices in the cluster.
  3. Click the Settings button.
    The Settings screen opens to display the current state of the DCD cluster defined for this BIG-IQ device.
  4. Check the cluster health status. If the cluster health is not green, resolve the underlying issues, and then repeat this check until the cluster is operating normally.

Confirm disk space is sufficient for minimal downtime upgrade

As part of preparing to upgrade your data collection device with minimal downtime, you must confirm that there is sufficient disk size in the cluster so that when you take a DCD offline, there is room for its data on other devices in the cluster. If the amount of free space in the cluster is less than the amount of data on any one node, then there is insufficient space to upgrade without downtime. If this is the case, you need to either add DCDs or increase storage space on the existing DCDs.
Important: If your cluster has multiple zones, you must perform the disk space check for each zone.
  1. Use SSH to log in to a device in the cluster.
    You must log in as root to perform this procedure.
  2. Determine the storage space requirement for your DCD cluster using the following command:
    curl localhost:9200/_cat/allocation?v

    shards disk.indices disk.used disk.avail disk.total disk.percent host        ip          node
        57      397.5mb       2gb      7.8gb      9.8gb           20 10.10.10.5  10.10.10.5  8637c04c-1b83-4795-b1f0-347ac733fd10
        56      471.7mb     2.2gb      7.5gb      9.8gb           23 10.10.10.3  10.10.10.3  9d718ba7-5bb9-4866-9aa3-4677a1f60e46
        56        393mb     2.1gb      7.7gb      9.8gb           21 10.10.10.2  10.10.10.2  8c4e58b4-a005-404f-9a53-6e318ec0e381
        57      444.2mb       2gb      7.8gb      9.8gb           20 10.10.10.10 10.10.10.10 11ac40f9-5b13-4f9a-a739-0351858ba571
  3. Analyze the storage space requirement for your DCD to determine if there is sufficient disk space.
    In the previous example, there is plenty of space. The DCD consuming the most data is only consuming 2.2 GB, and each of the other DCDs has almost 8 GB free. So when that DCD goes offline to upgrade, the system can move the 2.2 GB of data to the remaining 15.5 GB of free space. If these numbers were reversed, so that the DCD consuming the most storage had 7.8 GB of data, and the remaining DCDs only had 6.3 GB free, there would be insufficient space to move the data when that DCD went offline.
If there is sufficient space, you can proceed. Otherwise, you need to either add DCDs, or add DCD storage space.
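The arithmetic in step 3 can also be scripted. The awk sketch below normalizes the disk.used (column 3) and disk.avail (column 4) values to megabytes, finds the node holding the most data, and checks whether that data fits in the free space of the remaining nodes. It runs against the sample rows from step 2; on a live system you would pipe `curl -s localhost:9200/_cat/allocation` into the same awk program instead.

```shell
#!/bin/bash
# Sketch: script the disk space check from step 3. The sample rows below are
# the output from step 2; replace the heredoc with live _cat/allocation
# output on a real cluster.
cat <<'EOF' > /tmp/allocation.txt
57 397.5mb 2gb 7.8gb 9.8gb 20 10.10.10.5 10.10.10.5 8637c04c-1b83-4795-b1f0-347ac733fd10
56 471.7mb 2.2gb 7.5gb 9.8gb 23 10.10.10.3 10.10.10.3 9d718ba7-5bb9-4866-9aa3-4677a1f60e46
56 393mb 2.1gb 7.7gb 9.8gb 21 10.10.10.2 10.10.10.2 8c4e58b4-a005-404f-9a53-6e318ec0e381
57 444.2mb 2gb 7.8gb 9.8gb 20 10.10.10.10 10.10.10.10 11ac40f9-5b13-4f9a-a739-0351858ba571
EOF

result=$(awk '
function mb(v) {                  # normalize "2gb" / "397.5mb" to megabytes
  if (v ~ /gb$/) return substr(v, 1, length(v) - 2) * 1024
  if (v ~ /mb$/) return substr(v, 1, length(v) - 2) + 0
  return v + 0
}
{
  used[NR] = mb($3); avail[NR] = mb($4)
  total_avail += avail[NR]
  if (used[NR] > max_used) { max_used = used[NR]; max_row = NR }
}
END {
  # Free space left if the busiest node goes offline for its upgrade.
  free_elsewhere = total_avail - avail[max_row]
  if (free_elsewhere >= max_used)
    printf "OK: %.1f GB of data fits in %.1f GB free on the other nodes\n", max_used / 1024, free_elsewhere / 1024
  else
    print "INSUFFICIENT: add DCDs or storage before upgrading"
}' /tmp/allocation.txt)
echo "$result"
```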

Confirm that BIG-IP DCD configuration is correct

In preparing to upgrade your data collection device with minimal downtime, you must confirm that the BIG-IP® device to DCD configuration is correct, so that when a particular DCD is upgraded, data that was being routed to it is automatically routed to another DCD in the cluster. There are two settings of particular concern:
  • Confirm that data sent from the BIG-IP devices is not being sent to just one DCD. Each BIG-IP device must be configured to send data to multiple DCDs.
  • Confirm that the BIG-IP devices are configured with appropriate monitors that allow for traffic to switch to a different DCD when one DCD is taken offline.
  1. Analyze the data routing configuration for all of the BIG-IP devices that send data to your DCD cluster.
  2. If you find a BIG-IP device that is configured to send data to only one DCD, change that configuration before proceeding.
    Refer to the BIG-IP documentation on support.f5.com for details on how to configure the BIG-IP to DCD routing.
  3. Analyze the monitor configuration for all of the BIG-IP devices that send data to your DCD cluster. Make sure that each device is configured to send data to an alternate DCD if one DCD goes offline.
  4. If you find a BIG-IP device that is not configured with appropriate monitors, change that configuration before proceeding.
    Refer to the BIG-IP documentation on support.f5.com for details on how to configure BIG-IP monitors correctly.

Confirm correct minimum master devices setting for version 5.2

In preparing to upgrade your data collection device with minimal downtime, you must confirm that the minimum master devices setting is correctly configured so that when a DCD is upgraded, its data can fail over to another DCD in the cluster.
  1. At the top of the screen, click System.
  2. On the left, expand BIG-IQ DATA COLLECTION and then select BIG-IQ Data Collection Devices.
    The BIG-IQ Data Collection Devices screen opens to list the data collection devices in the cluster.
  3. Click the Settings button.
    The Settings screen opens to display the current state of the DCD cluster defined for this BIG-IQ device.
  4. Confirm that the value of the Minimum Master Eligible Devices setting is correct.
    • If you are upgrading a single zone DCD cluster, confirm that the number of devices is less than or equal to the number of DCDs in the cluster.
    • If you are upgrading a multiple zone DCD cluster, confirm that the number of devices is less than or equal to the number of DCDs in each zone.
  5. If the value is too high, click Update, and type in a number that is less than the number of DCDs in the cluster.
  6. Click the Save & Close button at the bottom of the screen.
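As a quick sanity check before changing the value, you can test a candidate setting against both the rule from step 4 (no greater than the number of DCDs in the cluster or zone) and the quorum formula floor(n/2) + 1. Note that the quorum formula is the general Elasticsearch guideline for master election, not a value taken from this chapter, and `dcds` and `setting` below are example numbers, not values from your cluster.

```shell
#!/bin/sh
# Sketch: sanity-check a candidate Minimum Master Eligible Devices value.
# dcds and setting are example numbers; the quorum formula is the general
# Elasticsearch guideline, not a value from this chapter.
dcds=3
setting=2
quorum=$(( dcds / 2 + 1 ))
if [ "$setting" -le "$dcds" ] && [ "$setting" -ge "$quorum" ]; then
  msg="setting $setting is valid for $dcds DCDs (quorum $quorum)"
else
  msg="adjust the setting: recommended value for $dcds DCDs is $quorum"
fi
echo "$msg"
```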

Stop snapshot creation

Because the upgrade process creates a mixed software version environment, if snapshot schedules are configured for the cluster, you should stop creating snapshots before you begin the upgrade to prevent possible issues with your data.
  1. Use SSH to log in to the primary BIG-IQ system for this cluster.
    You must log in as root to perform this procedure.
  2. Retrieve the list of scheduled snapshots using the following command: restcurl cm/shared/esmgmt/es-snapshot-task | grep task-scheduler
    config # restcurl cm/shared/esmgmt/es-snapshot-task | grep task-scheduler 
    "link": "https://localhost/mgmt/shared/task-scheduler/scheduler/0fdf50ec-8a17-3da9-b717-c63637ccc68a"
    "link": "https://localhost/mgmt/shared/task-scheduler/scheduler/0af33352-2f33-32b3-85cb-1281bb88c249"
    "link": "https://localhost/mgmt/shared/task-scheduler/scheduler/2ad770a8-bdb0-3383-99a9-300846eb0972"
    
    In this example, three snapshot schedules are configured.
  3. Stop each of the schedules using the following command: restcurl -X PATCH -d '{"status":"DISABLED"}' shared/task-scheduler/scheduler/<SNAPSHOT ID>
    # restcurl -X PATCH -d '{"status":"DISABLED"}' shared/task-scheduler/scheduler/0af33352-2f33-32b3-85cb-1281bb88c249
    { "id": "0af33352-2f33-32b3-85cb-1281bb88c249", "status":"DISABLED", ...}
    
    After you run the command for each scheduled snapshot, no more snapshots are created.
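The two commands above can be combined so that you do not have to copy each scheduler ID by hand. This sketch extracts the IDs from link lines like those in step 2 (two of the sample links are captured in a heredoc here) and prints the disable command for each; on the BIG-IQ itself you would pipe the live restcurl output into the same sed filter and drop the echo to actually run each PATCH.

```shell
#!/bin/sh
# Sketch: disable every snapshot schedule in one pass. The heredoc holds two
# of the sample link lines from step 2; on a live system, pipe the output of
#   restcurl cm/shared/esmgmt/es-snapshot-task | grep task-scheduler
# into the same sed filter instead.
cat <<'EOF' > /tmp/schedules.txt
"link": "https://localhost/mgmt/shared/task-scheduler/scheduler/0fdf50ec-8a17-3da9-b717-c63637ccc68a"
"link": "https://localhost/mgmt/shared/task-scheduler/scheduler/0af33352-2f33-32b3-85cb-1281bb88c249"
EOF

# Pull just the trailing scheduler ID out of each link line.
ids=$(sed -n 's#.*/scheduler/\([^"]*\)".*#\1#p' /tmp/schedules.txt)

for id in $ids; do
  # Drop the echo to actually disable each schedule on the BIG-IQ.
  echo "restcurl -X PATCH -d '{\"status\":\"DISABLED\"}' shared/task-scheduler/scheduler/$id"
done
```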
