Manual Chapter : Deploying BIG-IP Virtual Edition

Applies To:

BIG-IP AAM

  • 11.6.4, 11.6.3, 11.6.2, 11.6.1

BIG-IP APM

  • 11.6.4, 11.6.3, 11.6.2, 11.6.1

BIG-IP GTM

  • 11.6.4, 11.6.3, 11.6.2, 11.6.1

BIG-IP Analytics

  • 11.6.4, 11.6.3, 11.6.2, 11.6.1

BIG-IP LTM

  • 11.6.4, 11.6.3, 11.6.2, 11.6.1

BIG-IP AFM

  • 11.6.4, 11.6.3, 11.6.2, 11.6.1

BIG-IP PEM

  • 11.6.4, 11.6.3, 11.6.2, 11.6.1

BIG-IP ASM

  • 11.6.4, 11.6.3, 11.6.2, 11.6.1

Deploying BIG-IP Virtual Edition

Host machine requirements and recommendations

To successfully deploy and run the BIG-IP® VE system, the host system must satisfy minimum requirements.

The host system must include:

  • RHEL, Ubuntu, Debian, or CentOS with the KVM package. The Virtual Edition and Supported Hypervisors Matrix, published on the AskF5™ web site (http://support.f5.com), identifies the supported Linux versions, as well as which operating systems provide support for SR-IOV and TSO.
  • Virtual Machine Manager®
  • Connection to a common NTP source (this is especially important for each host in a redundant system configuration)
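
A quick way to confirm that a host is synchronized to a common NTP source is to query the time daemon directly. This is a minimal check that assumes the host runs ntpd; on chrony-based hosts, use chronyc sources instead.

    # Show the NTP peers the host is synchronized with; the peer marked
    # with an asterisk (*) is the currently selected time source.
    ntpq -p
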
The hypervisor CPU must meet the following requirements:
  • Use 64-bit architecture.
  • Have support for virtualization (AMD-V or Intel VT-x) enabled.
  • Support a one-to-one thread-to-defined virtual CPU ratio, or (on single-threading architectures) support at least one core per defined virtual CPU.
  • If you use an Intel processor, it must be from the Core (or newer) workstation or server family of CPUs.
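
You can verify most of these CPU requirements from a shell on the Linux host. The following is a minimal sketch using standard Linux tools; the flag names (vmx for Intel VT-x, svm for AMD-V) come from /proc/cpuinfo.

    # A nonzero count means the CPU advertises hardware virtualization
    # (Intel VT-x or AMD-V). If the kvm modules fail to load despite this,
    # the feature may still be disabled in the BIOS.
    egrep -c '(vmx|svm)' /proc/cpuinfo

    # Confirm that the KVM kernel modules are loaded.
    lsmod | grep kvm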

SSL encryption processing on your VE will be faster if your host CPU supports the Advanced Encryption Standard New Instructions (AES-NI). Contact your CPU vendor for details on which CPUs provide AES-NI support.
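
On a Linux host, you can also check for AES-NI support directly by looking for the aes flag in /proc/cpuinfo:

    # Prints "aes" once if the CPU advertises AES-NI support;
    # prints nothing otherwise.
    grep -m1 -ow aes /proc/cpuinfo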

The hypervisor memory requirement depends on the number of licensed TMM cores. The table describes these requirements.

Number of Cores    Memory Required
1                  2 GB
2                  4 GB
4                  8 GB
8                  16 GB
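
Before creating the guest, you can confirm that the host has enough memory and cores for the deployment you plan to license. A minimal check on the Linux host:

    # Total and available host memory, in gigabytes.
    free -g

    # Number of logical CPUs available on the host.
    nproc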

About BIG-IP VE KVM deployment

To deploy the BIG-IP® Virtual Edition (VE) system on Linux KVM, you need to perform these tasks:

  • Verify the host machine requirements.
  • Deploy an instance of the BIG-IP system as a virtual machine on a host system.
  • Power on the BIG-IP VE virtual machine.
  • Assign a management IP address to the BIG-IP VE virtual machine.

After you complete these tasks, you can log in to the BIG-IP VE system and run the Setup utility. Using the Setup utility, you can perform basic network configuration tasks, such as assigning VLANs to interfaces.

Deploying the BIG-IP VE virtual machine

To create an instance of the BIG-IP system that runs as a virtual machine on the host system, complete the steps in this procedure.

Important: Do not modify the configuration of the KVM guest environment with settings less powerful than the ones recommended in this document. This includes the settings for the CPU, RAM, and network adapters. Doing so might produce unexpected results.
  1. In a browser, open the F5 Downloads page (https://downloads.f5.com).
  2. Download the BIG-IP VE file package ending with qcow2.zip.
  3. Extract the file from the Zip archive and save it where your qcow2 files reside on the KVM server.
  4. Use VNC to access the KVM server, and then start Virt Manager.
  5. Right-click localhost (QEMU), and from the popup menu, select New.
    The Create a new virtual machine, Step 1 of 4 dialog box opens.
  6. In the Name field, type a name for the connection.
  7. Select Import existing disk image as the method for installing the operating system, and click Forward.
    The Create a new virtual machine, Step 2 of 4 dialog box opens.
  8. Type the path to the extracted qcow2 file, or click Browse to navigate to the path location; select the file, and then click the Choose Volume button to fill in the path.
  9. For the OS type setting, select Linux; for the Version setting, select Red Hat Enterprise Linux 6; and then click Forward.
    The Create a new virtual machine, Step 3 of 4 dialog box opens.
  10. In the Memory (RAM) field, type the appropriate amount of memory (in megabytes) for your deployment (for example, 4096 for a 4 GB deployment). From the CPUs list, select the number of CPU cores appropriate for your deployment, and click Forward.
    The Create a new virtual machine, Step 4 of 4 dialog box opens.
  11. Select Customize configuration before install, and click the Advanced options arrow.
  12. Select the network interface adapter that corresponds to your management IP address, and click Finish.
    The Virtual Machine configuration dialog box opens.
  13. (If SR-IOV support is required, skip steps 13 through 15 and perform steps 16 and 17 instead.) Click Add Hardware. When the Add New Virtual Hardware dialog box opens, select Network to access controls for specifying a new network interface device.
  14. From the Host device list, select the network interface adapter that corresponds to your external network, and from the Device model list, select virtio. Then click Finish.
  15. Repeat the last two steps two more times. The first time you repeat them, select the network interface adapter that corresponds to your internal network. The second time, select the network interface adapter that corresponds to your HA network.
  16. (Perform steps 16 and 17 only if SR-IOV support is required.) Click Add Hardware. When the Add New Virtual Hardware dialog box opens, select PCI Host Device, and then select the PCI device that corresponds to the virtual function mapped to your host device's external VLAN. Then click Finish.
  17. Repeat step 16 two more times. The first time you repeat it, select the PCI device that corresponds to the virtual function mapped to your host device's internal VLAN. The second time you repeat it, select the PCI device that corresponds to the virtual function mapped to your host device's HA VLAN.
  18. From the left pane, select Disk 1.
  19. Click the Advanced options button.
  20. From the Disk bus list, select Virtio.
  21. From the Storage format list, select qcow2.
  22. Click Apply.
  23. Click Begin Installation.
    Virtual Machine Manager creates the virtual machine just as you configured it.
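
As an alternative to the Virt Manager wizard, you can create an equivalent guest from the command line with virt-install. The following is a minimal sketch of a 2-core, 4 GB deployment that uses virtio networking (the non-SR-IOV path in steps 13 through 15 above); the guest name, image path, and bridge names are examples that you must replace with your own values, and valid --os-variant strings vary by host (list them with osinfo-query os).

    virt-install \
        --import \
        --name BIGIP-VE \
        --ram 4096 \
        --vcpus 2 \
        --disk path=/var/lib/libvirt/images/BIGIP-VE.qcow2,format=qcow2,bus=virtio \
        --network bridge=br-mgmt,model=virtio \
        --network bridge=br-external,model=virtio \
        --network bridge=br-internal,model=virtio \
        --network bridge=br-ha,model=virtio \
        --os-variant rhel6 \
        --noautoconsole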

Powering on the virtual machine

You power on the virtual machine so that you can begin assigning IP addresses.
  1. Open Virtual Machine Manager.
  2. Right-click the virtual machine that you want to power on, and then from the popup menu, select Open.
    The virtual machine opens, but in a powered-off state.
  3. From the toolbar, select the Power on the virtual machine (right-arrow) button.
    The virtual machine boots and then displays a login prompt.
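
If you manage guests from the command line rather than Virtual Machine Manager, virsh can power on the virtual machine instead; the guest name BIGIP-VE is an example.

    # Power on the guest and confirm its state.
    virsh start BIGIP-VE
    virsh list --all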

There are two default accounts used for initial configuration and setup:

  • The root account provides access locally, using SSH, or using the F5 Configuration utility. The root account password is default.
  • The admin account provides access through the web interface. The admin account password is admin.

You should change passwords for both accounts before bringing a system into production.
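
One way to change both passwords, assuming console or SSH access as root:

    # Change the root account password (interactive prompt).
    passwd

    # Change the admin account password (interactive prompt).
    tmsh modify auth password admin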

Assigning a management IP address to a virtual machine

The virtual machine needs an IP address assigned to its virtual management port.
Tip: By default, new deployments and installations use DHCP to acquire the management port IP address.
  1. At the login prompt, type root.
  2. At the password prompt, type default.
  3. Type config and press Enter.
    The F5 Management Port Setup screen opens.
  4. Click OK.
  5. If you want DHCP to automatically assign an address for the management port, select Yes. Otherwise, select No and follow the instructions for manually assigning an IP address and netmask for the management port.

Regardless of the hypervisor you use, you can run tmsh show sys management-ip from the BIG-IP VE console to confirm that the management IP address has been properly assigned.

Tip: F5 Networks highly recommends that you specify a default route for the virtual management port, but it is not required for operating the virtual machine.
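
If DHCP is not available, you can also assign the management address, and the recommended default route, from the BIG-IP VE console using tmsh. The addresses below are examples; substitute your own.

    tmsh modify sys global-settings mgmt-dhcp disabled
    tmsh create sys management-ip 192.0.2.245/24
    tmsh create sys management-route default gateway 192.0.2.1
    tmsh save sys config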

Turning off LRO/GRO from the VE guest to optimize PEM performance

Before you can access the VE guest to turn off LRO and GRO, you must have assigned the guest a management IP address.
To optimize performance when you use the virtual machine with the PEM module, you must turn off large receive offload (LRO) and generic receive offload (GRO) for each network interface card (NIC) that is used to pass traffic, and you must use SR-IOV. Although there are a number of ways to turn off LRO, the most reliable is to connect to the VE guest and use the ethtool utility.
  1. Use an SSH tool to access the management IP address of the BIG-IP® VE system.
  2. From the command line, log in as root.
  3. Use ethtool to turn off rx-checksumming for the NIC.
    ethtool -K eth<X> rx off
    Important: In this example, substitute the NIC number for <X>.
  4. Use ethtool to turn off LRO for the NIC.
    ethtool -K eth<X> lro off
    Important: In this example, substitute the NIC number for <X>.
  5. Use ethtool to turn off GRO for the NIC.
    ethtool -K eth<X> gro off
    Important: In this example, substitute the NIC number for <X>.
  6. Use ethtool to confirm that LRO and GRO are successfully turned off for the NIC.
    ethtool -k eth<X>
    Important: In this example, substitute the NIC number for <X>.
    In the system response to your command, you should see this information:

    generic-receive-offload: off

    large-receive-offload: off

    If either setting is reported as on, your attempt to turn it off was not successful.
  7. Repeat steps 3 through 6 for each of the NICs that the BIG-IP VE uses to pass traffic.
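
Because you repeat the same commands for every traffic NIC, you can also apply and verify the settings in a single loop. The interface names below are examples; substitute the NICs your BIG-IP VE uses to pass traffic.

    # Turn off checksum offload, LRO, and GRO on each traffic NIC,
    # then print the two offload settings to confirm they are off.
    for nic in eth1 eth2 eth3; do
        ethtool -K "$nic" rx off lro off gro off
        ethtool -k "$nic" | egrep 'generic-receive-offload|large-receive-offload'
    done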

With LRO and GRO successfully turned off, the PEM module on the BIG-IP VE system will have better performance and stability.

You can achieve optimum performance (throughput and stability) with the PEM module only if you enable SR-IOV.