Getting Started with BIG-IP Virtual Edition

What is BIG-IP Virtual Edition?

BIG-IP Virtual Edition (VE) is a version of the BIG-IP system that runs as a virtual machine on specifically supported hypervisors. BIG-IP VE virtualizes a hardware-based BIG-IP system running a VE-compatible version of BIG-IP software.

Note: The BIG-IP VE product license determines the number of cores and the maximum allowed throughput rate. To view the rate limit, you can display the BIG-IP VE licensing page within the BIG-IP Configuration utility. Lab editions have no guarantee of throughput rate and are not supported for production environments.
Number of Cores    Throughput    Memory Required
1                  200 Mbps      2 GB
2                  1 Gbps        2 GB
4                  3 Gbps        8 GB
8                  5 Gbps        16 GB
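You can also review the applied license from the command line. For example, the following tmsh command displays the license details, including the platform and performance entitlements; the exact fields in the output vary by license.

show sys license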

About BIG-IP VE compatibility with VMware hypervisor products

Each release of BIG-IP Virtual Edition (VE) software adds support for additional hypervisor management products. The Virtual Edition and Supported Hypervisors Matrix on the AskF5 website, http://support.f5.com, details which hypervisors are supported for each release.

Important: Hypervisors other than those identified in the matrix are not supported with this BIG-IP version; installation attempts on unsupported platforms might not be successful.

About the hypervisor guest definition requirements

The VMware virtual machine guest environment for the BIG-IP Virtual Edition (VE), at minimum, must include:

  • 2 x virtual CPUs
  • 4 GB RAM
  • 1 x VMXNET3 virtual network adapter or Flexible virtual network adapter (for management)
  • 1 x VMXNET3 virtual network adapter (three are configured in the default deployment for dataplane network access)
  • 1 x 100 GB SCSI disk, by default
  • 1 x 50 GB SCSI optional secondary disk, which might be required as a datastore for specific BIG-IP modules. For information about datastore requirements, refer to the BIG-IP module's documentation.
Important: Supplying less than the minimum virtual configuration can produce unexpected results.

For production licenses, F5 Networks recommends using the maximum configuration limits for the BIG-IP VE system. For lab editions, reservations can be lower. For each virtual machine, the VMware virtual machine guest environment permits a maximum of 10 virtual network adapters (either 10 VMXNET3 adapters, with 1 for management and 9 for the dataplane, or 1 Flexible management adapter and 9 VMXNET3 dataplane adapters).

There are also some maximum configuration limits to consider for deploying a BIG-IP VE virtual machine, such as:

  • CPU reservation can be up to 100 percent of the defined virtual machine hardware. For example, if the hypervisor has a 3 GHz core speed, the reservation of a virtual machine with 2 CPUs can be at most 6 GHz.
  • To achieve licensing performance limits, all allocated RAM must be reserved.
  • For production environments, virtual disks should be deployed Thick (allocated up front). Thin deployments are acceptable for lab environments.
Important: There is no longer any limitation on the maximum amount of RAM supported on the hypervisor guest.
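As one way to express full reservations, the virtual machine configuration can carry scheduler settings similar to the following sketch. The parameter names and values shown are assumptions for a 2-vCPU, 4 GB guest on a 3 GHz host, not values taken from this guide; verify them against your VMware documentation, or set the reservations directly in the vSphere client instead.

  • Name - sched.cpu.min, Value - 6000 (CPU reservation in MHz; assumes 2 vCPUs x 3 GHz)
  • Name - sched.mem.min, Value - 4096 (memory reservation in MB; reserve all allocated RAM)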

About TCP Segmentation Offloading support

If you want to disable support for TCP Segmentation Offloading (TSO), you must submit a tmsh command, because the TSO feature is enabled by default. Note that enabling TSO support also enables support for large receive offload (LRO) and Jumbo Frames.

Configuring a hypervisor for TSO support

You must have the Admin user role to enable or disable TSO support for a hypervisor.

Using the tmsh sys db commands, you can turn TSO support on or off, or check whether support is currently enabled.
  1. To determine whether TSO support is currently enabled, use the tmsh list command. list sys db tm.tcpsegmentationoffload
  2. To enable support for TSO, use the tmsh modify command. modify sys db tm.tcpsegmentationoffload value enable
  3. To disable support for TSO, use the tmsh modify command. modify sys db tm.tcpsegmentationoffload value disable
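For example, the list command in step 1 returns the current setting in a form similar to the following sketch; the value shown is illustrative only.

sys db tm.tcpsegmentationoffload {
    value "enable"
}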

About SR-IOV support

If you want support for SR-IOV, in addition to using the correct hardware and BIOS settings, you must configure hypervisor settings before you set up the guests.

Configuring a hypervisor for SR-IOV support

You must have an SR-IOV-compatible network interface card (NIC) installed and SR-IOV enabled in the BIOS before you can configure SR-IOV support.
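To confirm which physical NICs and drivers the host detects before you change any module parameters, you can list them from the hypervisor console. This is an optional check, and the columns in the output vary by host.

esxcli network nic list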

From the hypervisor console, use esxcli (the vSphere command-line interface tool) to set the max_vfs system module parameter.

  1. Check the current ixgbe driver settings. esxcli system module parameters list -m ixgbe
  2. Set the number of virtual functions for each port. In this example, 16,16 is for a 2-port card with 16 virtual functions per port. esxcli system module parameters set -m ixgbe -p "max_vfs=16,16"
  3. Reboot the hypervisor so that the changes take effect. When you next visit the user interface, the SR-IOV NIC appears in the Settings section of the guest as a PCI device.
  4. Using the VMware hypervisor user interface, add a PCI device, and then add two virtual functions.
    05:10.0 | Intel Corporation 82599 Ethernet Controller Virtual Function
    05:10.1 | Intel Corporation 82599 Ethernet Controller Virtual Function
  5. Use either the console command line or the user interface to configure the VLANs that will serve as pass-through devices for the virtual functions. For each interface and VLAN combination, specify a name and a value, as shown in the example following this list.
    • Name - pciPassthru0.defaultVlan
    • Value - 3001
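For example, a guest that uses two virtual functions might carry entries similar to the following; the VLAN IDs shown are placeholders rather than required values.

  • Name - pciPassthru0.defaultVlan, Value - 3001
  • Name - pciPassthru1.defaultVlan, Value - 3002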
You can now power on the virtual machine and begin deploying it.