BIG-IP® Virtual Edition (VE) is a version of the BIG-IP system that runs as a virtual machine in specifically-supported hypervisors. BIG-IP VE creates a virtual instance of a hardware-based BIG-IP system running a VE-compatible version of BIG-IP® software.
New releases of BIG-IP® Virtual Edition (VE) software frequently add support for additional hypervisor management products. The Virtual Edition and Support Matrix on the AskF5™ website, http://support.f5.com, details which hypervisors are supported for each release.
The Xen Project virtual machine guest environment for the BIG-IP® Virtual Edition (VE), at minimum, must include:
For production licenses, F5 Networks suggests using the maximum configuration limits for the BIG-IP VE system. For lab editions, the required reservations can be lower. For each virtual machine, the Xen Project virtual machine guest environment permits a maximum of 10 network adapters. You can deploy these either as one management port and nine dataplane ports, or as one management port, eight dataplane ports, and one HA port.
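For example, these adapters correspond to vif entries in the guest's xl configuration; the bridge names below are placeholder assumptions for whatever networks exist on your Xen host, and only four of the possible ten adapters are shown.

```
# Excerpt from an xl guest configuration (bridge names are examples only):
# the first adapter becomes the management port, the rest become dataplane ports.
vif = [
    'bridge=xenbr0',    # management port
    'bridge=xenbr1',    # dataplane port 1.1
    'bridge=xenbr2',    # dataplane port 1.2
    'bridge=xenbr3',    # dataplane port 1.3
]
```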
There are also some maximum configuration limits to consider when deploying a BIG-IP VE virtual machine, such as:
This table details the capabilities and limitations of the disk space options you can choose.
| Provisioned disk space | Capabilities and Limitations | Special Considerations |
| --- | --- | --- |
| 8 GB | The Local Traffic Manager (LTM®) module is supported, but there is no space available for installing LTM upgrades. | Disk space can be increased if you need to upgrade LTM or decide to provision additional modules. |
| 37 GB | The LTM module is supported. There is also sufficient space available for installing LTM upgrades. | Disk space can be increased if you decide to provision additional modules. You can also install another instance of LTM on a separate partition. |
| 139 GB (OS only) or 159 GB (with Datastore) | All modules and combinations are supported. There is also sufficient space available for installing upgrades. | If you plan to use the Acceleration Module (AM) in addition to other modules, you must add a second 20 GB disk in addition to the 139 GB operating system disk used by the other modules. The 20 GB volume serves as a dedicated Datastore for AM; do not use this volume for any other purpose. If you need additional space, increase the disk space allotted to this VE. For information on configuring the Datastore volume, refer to Disk Management for Datastore, published on the AskF5™ website, http://support.f5.com. |
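If you are sizing disks up front or adding the dedicated 20 GB Datastore disk, a file-backed Xen guest can be prepared with qemu-img; the paths, sizes, and raw image format below are illustrative assumptions, and the Datastore volume itself must still be configured as described in the article referenced above.

```
# Grow the existing OS disk image (back it up first; path and size are examples).
qemu-img resize /var/lib/xen/images/bigip-ve.img +30G

# Create a separate 20 GB image to attach to the guest as the dedicated Datastore disk.
qemu-img create -f raw /var/lib/xen/images/bigip-ve-datastor.img 20G
```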
The general memory recommendation for BIG-IP® Virtual Edition (VE) is 2 GB per virtual CPU. The following guidelines can also help set expectations, based on which modules are licensed on the VE guests.
| Provisioned memory | Supported module combinations | Module-specific concerns |
| --- | --- | --- |
| 12 GB or more | All module combinations are fully supported. | N/A |
| 8 GB | Provisioning more than three modules together is not supported. | BIG-IP® DNS and Link Controller™ do not count toward the module-combination limit. |
| More than 4 GB, but less than 8 GB | Provisioning more than three modules together is not supported. (See the module-specific concerns relating to AAM.) | Application Acceleration Manager™ (AAM) cannot be provisioned with any other module; AAM™ can only be provisioned as standalone. BIG-IP DNS and Link Controller do not count toward the module-combination limit. |
| 4 GB or less | Provisioning more than two modules together is not supported. | AAM can only be provisioned as dedicated. |
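To apply the 2 GB per virtual CPU guideline above, size the guest's memory to match its vCPU count in the xl configuration; the values below are examples only (4 vCPUs with 8 GB, which per the table supports up to three provisioned modules).

```
# Excerpt from an xl guest configuration: 4 virtual CPUs with 2 GB of memory each.
vcpus  = 4
memory = 8192    # megabytes
```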
TCP Segmentation Offloading (TSO) support is enabled by default; if you want to disable it, you must use a tmsh command. Note that enabling TSO support also enables support for large receive offload (LRO) and Jumbo Frames.
You must have the Admin user role to enable or disable TSO support for a hypervisor.
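On recent VE releases this setting is typically controlled through a sys db variable; the variable name used below is an assumption, so confirm it against the release notes for your version.

```
# Check the current setting (database variable name assumed).
tmsh list sys db tm.tcpsegmentationoffload

# Disable TSO support (it is enabled by default).
tmsh modify sys db tm.tcpsegmentationoffload value disable
```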
If you want support for SR-IOV, in addition to using the correct hardware and BIOS settings, you must configure hypervisor settings before you set up the guests.
You must have an SR-IOV-compatible network interface card (NIC) installed and SR-IOV enabled in the BIOS before you can configure SR-IOV support.
Refer to the documentation included with your hypervisor operating system for information on support and configuration for SR-IOV.
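As an illustration only (your hypervisor documentation remains authoritative), on a Linux-based host with a supported NIC the virtual functions are commonly exposed through sysfs; the interface name eth1 below is an assumption.

```
# Confirm how many virtual functions the NIC supports (interface name is an example).
cat /sys/class/net/eth1/device/sriov_totalvfs

# Create, for example, 4 virtual functions to assign to BIG-IP VE guests.
echo 4 > /sys/class/net/eth1/device/sriov_numvfs
```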