Manual Chapter: Prerequisites for BIG-IP Virtual Edition on ESXi

Host CPU requirements

The host hardware CPU must meet the following requirements.

  • The CPU must have 64-bit architecture.
  • The CPU must have virtualization support (AMD-V or Intel VT-x) enabled.
  • The CPU must support a one-to-one, thread-to-defined virtual CPU ratio, or on single-threading architectures, support at least one core per defined virtual CPU.
  • In VMware ESXi 5.5 and later, do not set the number of virtual sockets to more than 2.
  • If your CPU supports the Advanced Encryption Standard New Instructions (AES-NI), SSL encryption processing on BIG-IP® VE will be faster. Contact your CPU vendor for details about which CPUs provide AES-NI support.
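On a Linux host you can inspect the CPU flags to confirm these capabilities before installing the hypervisor. The snippet below is an illustrative sketch, not part of the original chapter; it checks a sample flags line (standing in for the `flags:` line of /proc/cpuinfo) for 64-bit long mode (lm), Intel VT-x (vmx; AMD-V reports svm), and AES-NI (aes):

```shell
# Illustrative check: a sample /proc/cpuinfo "flags" line stands in for the
# real file. On a live Linux host you would read the flags from /proc/cpuinfo.
flags="fpu vme lm vmx aes sse2"

# lm = 64-bit long mode, vmx = Intel VT-x (AMD-V shows svm), aes = AES-NI
for f in lm vmx aes; do
  if echo "$flags" | grep -qw "$f"; then
    echo "$f: present"
  else
    echo "$f: missing"
  fi
done
```

On ESXi itself, virtualization support is reported in the host's hardware summary in vSphere instead.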

Host memory requirements

The number of licensed TMM cores determines how much memory the host system requires.

Number of cores    Memory required
1                  2 GB
2                  4 GB
4                  8 GB
8                  16 GB
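The table follows a simple 2 GB-per-core pattern. As an illustrative helper (not part of the product), this can be expressed as:

```shell
# Illustrative sketch: host memory in GB for a given number of licensed
# TMM cores, following the table's 2 GB-per-core pattern.
host_mem_gb() {
  echo $(( $1 * 2 ))
}

host_mem_gb 4   # prints 8, matching the table row for 4 cores
```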

Configuring SR-IOV on the hypervisor

To increase performance, you can enable Single Root I/O Virtualization (SR-IOV). This requires an SR-IOV-compatible network interface card (NIC) installed in the host and SR-IOV enabled in the BIOS. You must also load the ixgbe driver and blacklist the ixgbevf driver.
  1. In vSphere, access the command-line tool, esxcli.
  2. Check the current ixgbe driver settings.
    esxcli system module parameters list -m ixgbe
  3. Set the ixgbe driver settings.
    In this example, 16,16 configures 16 virtual functions on each port of a two-port card.
    esxcli system module parameters set -m ixgbe -p "max_vfs=16,16"
  4. Reboot the hypervisor so that the changes take effect.
    When you next open the user interface, the SR-IOV NIC appears in the Settings area of the guest as a PCI device.
  5. Using vSphere, add a PCI device, and then add two virtual functions.

    05:10.0 | Intel Corporation 82599 Ethernet Controller Virtual Function

    05:10.1 | Intel Corporation 82599 Ethernet Controller Virtual Function

  6. Use either the console command line or user interface to configure the VLANs that will serve as pass-through devices for the virtual function. For each interface and VLAN combination, specify a name and a value.
    • Name - pciPassthru0.defaultVlan
    • Value - 3001
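These name/value pairs correspond to advanced configuration parameters in the guest's .vmx file. As a sketch (the second entry and both VLAN IDs are illustrative, not from the original chapter), the entries take this form:

```
pciPassthru0.defaultVlan = "3001"
pciPassthru1.defaultVlan = "3002"
```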
To complete SR-IOV configuration, after you deploy BIG-IP® VE, you must add three PCI device NICs and map them to your networks.

Virtual machine memory requirements

The guest should have a minimum of 4 GB of RAM for the initial 2 virtual CPUs. For each additional CPU, you should add an additional 2 GB of RAM.
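As an illustrative helper (not part of the product), the guideline above works out to 4 GB for the first 2 virtual CPUs plus 2 GB per additional CPU:

```shell
# Illustrative sketch: guest RAM in GB for a given vCPU count --
# 4 GB covers the initial 2 vCPUs, plus 2 GB for each additional vCPU.
vm_mem_gb() {
  local vcpus=$1
  echo $(( 4 + (vcpus - 2) * 2 ))
}

vm_mem_gb 2   # prints 4
vm_mem_gb 4   # prints 8
```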

If you license additional modules, you should add memory.
Provisioned memory  Supported modules       Details
4 GB or fewer       Two modules maximum.    AAM can be provisioned as standalone only.
4-8 GB              Three modules maximum.  BIG-IP® DNS does not count toward the module limit. Exception: Application Acceleration Manager™ (AAM®) cannot be provisioned with any other module; AAM is standalone only.
8 GB                Three modules maximum.  BIG-IP DNS does not count toward the module-combination limit.
12 GB or more       All modules.            N/A
Important: To achieve licensing performance limits, all allocated memory must be reserved.

Virtual machine storage requirements

The BIG-IP® modules you want to use determine how much storage the guest needs.

Provisioned storage  Supported modules                                           Details
8 GB                 Local Traffic Manager™ (LTM®) module only; no space for     You can increase storage if you need to upgrade LTM or provision additional modules.
                     LTM upgrades.
38 GB                LTM module only; space for installing LTM upgrades.         You can increase storage if you decide to provision additional modules. You can also install another instance of LTM on a separate partition.
139 GB               All modules and space for installing upgrades.              The Application Acceleration Manager™ (AAM®) module requires 20 GB of additional storage dedicated to AAM. For information about configuring the Datastore volume, see Disk Management for Datastore on the AskF5™ Knowledge Base (http://support.f5.com).

For production environments, deploy virtual disks thick-provisioned (allocated up front). Thin provisioning is acceptable for lab environments.
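If you create a data disk from the ESXi shell rather than through vSphere, a thick-provisioned disk can be created with vmkfstools. This is a command sketch only; the datastore path and size are illustrative, and you should verify the options against your ESXi version:

```
# Command sketch (illustrative path and size): create an eager-zeroed
# thick 139 GB virtual disk for a production deployment.
vmkfstools -c 139G -d eagerzeroedthick /vmfs/volumes/datastore1/bigip-ve/bigip-ve.vmdk
```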

Note: To change the disk size after deploying the BIG-IP system, see Increasing disk space for BIG-IP® VE.

Virtual machine network interfaces

When you deploy BIG-IP® VE, a specific number of virtual network interfaces (vNICs) are available.

Four vNICs are automatically defined for you.
  • For management access, one VMXNET3 vNIC or Flexible vNIC.
  • For dataplane access, three VMXNET3 vNICs.

Each virtual machine can have a maximum of 10 virtual NICs.
