Original Publication Date: 12/06/2016
BIG-IP Virtual Edition (VE) is a version of the BIG-IP system that runs as a virtual machine. Supported modules include Local Traffic Manager, BIG-IP DNS (formerly Global Traffic Manager), Application Security Manager, Access Policy Manager, Application Acceleration Manager, Policy Enforcement Manager, Advanced Firewall Manager, and Analytics. BIG-IP VE includes all features of device-based BIG-IP modules running on standard BIG-IP TMOS, except as noted in release notes and product documentation.
This version of the software is supported in the following configurations. For a list of supported VE hypervisors, see the Virtual Edition and Supported Hypervisors Matrix.
All licensable module combinations may be run on BIG-IP Virtual Edition (VE) guests provisioned with 12 GB or more of memory.
The following guidelines apply to VE guests configured with 8 GB of memory.
The following guidelines apply to VE guests provisioned with less than 8 GB and more than 4 GB of memory.
The following guidelines apply to VE guests provisioned with 4 GB or less of memory.
SOL14592: Compatibility between BIG-IQ and BIG-IP releases provides a summary of version compatibility for specific features between the BIG-IQ system and BIG-IP releases.
BIG-IP VE is now available in the Microsoft Azure Marketplace. Any new (BYOL) VE license can be used with the images available in the marketplace.
This release supports configuration of BIG-IP VE with a single NIC. In this configuration, networking objects (vNIC 1.0, an internal VLAN, and an internal self IP) are created automatically for you. This enables quicker creation of VE configurations and allows VE to run in Microsoft Azure. Single-NIC configuration is currently available only in Amazon AWS and Microsoft Azure.
This release provides VHDX Virtual Hard Disk format support for Hyper-V, which improves performance on Windows Server 2012 and provides protection against file corruption related to power failures by continuously keeping track of updates in the metadata. Although there are no VHDX images on the Downloads site, you can convert the current VHD images to VHDX.
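Because no VHDX images are posted on the Downloads site, conversion must be done locally. A minimal sketch of two common approaches, assuming a downloaded BIG-IP VE VHD image (the filenames below are illustrative):

```shell
# Convert a BIG-IP VE VHD image to VHDX format (filenames are illustrative).

# Option 1: qemu-img, which supports vhdx as an output format.
qemu-img convert -p -O vhdx BIGIP-VE.vhd BIGIP-VE.vhdx

# Option 2: the Convert-VHD PowerShell cmdlet on a Windows Server 2012
# host with the Hyper-V role installed:
#   Convert-VHD -Path BIGIP-VE.vhd -DestinationPath BIGIP-VE.vhdx
```

Either route produces a VHDX file you can attach to a Hyper-V virtual machine in place of the original VHD.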
|361367||Partitions are now created on an 8 MB boundary.|
|413024||To correctly decompress a *.vhd.zip file whose resulting file nears the 4 GB size, use a tool that supports Zip64 decompression. For example, UnZip 6.0 (or later), provided by Info-ZIP, supports Zip64 decompression.|
|442871||Extended the Linux kernel to provide details about the actual hypervisor to BIG-IP user software, so that the BIG-IP user software properly recognizes the installed VE guest as running on a known hypervisor. Important: If you have used the workaround and are licensed, removing the workaround *may* require a license change.|
|470627||Incorrect and benign log message of bandwidth utilization exceeded when licensed with rate limit in Virtual Edition no longer occurs.|
|471860||When you disable an interface, the state shows DISABLED. When you enable that interface, the indication for the interface now shows ENABLED.|
|475829||The public key for SSH access is now obtained from the AWS metadata service on first boot.|
|476126||The latest Emulex NIC driver was included in 11.5.1-HF5. It supports SR-IOV and VLAN tagging when Emulex NICs are used.|
|478896||The internal/dev license for Hourly Billing AMIs has been replaced with a proper production license.|
|481073||Added the needed attributes to the AMI name during generation.|
|482233||Improved the internal build script that generates Cloud images.|
|482434||Throughput and new connections per second are now comparable in AWS for SR-IOV enabled instances and in other instances.|
|482943||Internal build changes when deploying to the Cloud.|
|484399||The OVA now creates only one slot and leaves the remaining disk space free.|
|484733||The reassignment of IP addresses for forwarding virtual servers with SNATs defined in the configuration now occurs as expected in Amazon Web Services (AWS).|
|498992||Added more logging details for AWS failover failure to assist in detecting problems in failover.|
|513790||Inbound SSH sessions are no longer terminated when their packets are fragmented, for example, when starting, stopping, or restarting TMM, MCPD, and other daemons.|
|519510||A change in the L4 packet header offset resulting from VLAN header insertion is now accounted for when verifying the checksum.|
|520817||The maximum size of the datastor page cache has been capped at approximately 10 GB to mitigate the risk of this event occurring.|
|531986||The default tmm route no longer breaks Hourly licenses.|
|224507||When Virtual Edition (VE) is deployed on VMware, the management port might not correctly reflect the uplink port speed of the vSwitch it is connected to. This has no adverse effects on actual management port traffic. Workaround: None.|
|352856||Errors occur when migrating SCF files between different BIG-IP Virtual Edition (VE) hypervisors. The configuration does not load, and the system posts the following error: BIGpipe interface creation error: 01070318:3: 'The requested media for interface 1.1 is invalid.' Workaround: Remove the entire line that contains the 'media fixed' statement for each interface. When the media capabilities are removed from the SCF before loading, no error occurs.|
|358355||When deployed as a Microsoft Hyper-V virtual machine, BIG-IP Virtual Edition (VE) must be configured with Static Memory Allocation. Dynamic Memory Allocation is unsupported and might cause issues. Workaround: None.|
|364704||Certain hypervisors support taking a snapshot of the virtual machine that includes the active state of its memory. On VMware, this temporarily freezes the virtual machine, which might produce undesired results. Workaround: To avoid this problem on VMware hypervisors, do not include the virtual machine's memory when taking snapshots: uncheck the option 'Snapshot the virtual machine's memory'.|
|366403||After modifying the BIG-IP system topology by adding or removing network interfaces, the interface numbering might appear out of alignment with the previous boot of the VE, and NICs may appear that are no longer present. Usually the fifth NIC is the first to induce the problem. This impact can be seen even after reconfiguring the VLAN interfaces on the BIG-IP VE to match the new topology and MAC layout. After a binary MCPD database has been created, the system may not correctly detect the change, even after a subsequent reboot. Workaround: To ensure that the VE system properly detects the new or removed interfaces, run the command 'rm /var/db/mcpd*' at the BIG-IP VE command prompt, and then reboot the VE. After a new MCPD database file has been created, the VLAN interfaces may need to be reconfigured to map to the correct networks, either on the hypervisor, the BIG-IP VE, or both. Interface mapping can be verified by comparing the MAC addresses of the VE interfaces with the MAC addresses displayed in the hypervisor configuration for the Virtual Machine definition in which the VE resides. The BIG-IP VE MAC addresses can be found in the BIG-IP Configuration utility on the Network :: Interfaces page, via tmsh, or through other resources, such as iControl and iControl REST.|
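The workaround for issue 366403 can be sketched as the following shell session on the VE; the exact tmsh output layout for the MAC-comparison step may vary by version:

```shell
# Remove the binary MCPD database so interfaces are re-detected at boot.
rm -f /var/db/mcpd*
reboot

# After the reboot, list interface properties (including MAC addresses)
# to compare against the hypervisor's Virtual Machine definition:
tmsh show net interface all-properties
```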
|371458||On a XenServer host, all interfaces are expected to show up as 100TX-FD within tmsh. In this release, all application-traffic-handling interfaces are shown in the GUI with a media speed of 100 and an Active Duplex of half. This speed rating is purely cosmetic and does not reflect the actual speed and duplex of BIG-IP VE on a XenServer host. The actual link is a high-speed internal connection through a Virtual Network Interface within the hypervisor, at speeds greater than 100 Mbps. Workaround: None.|
|371631||BIG-IP Virtual Edition (VE) may incorrectly report the interface media duplex setting as none. The General Properties may show an incorrect Active Duplex setting when you navigate to Network :: Interfaces and click the interface, and the output of the 'tmsh show net interface all-properties' command may show incorrect information in the Media column. As a result, you are unable to confirm the current duplex setting of an interface. Workaround: For VE configurations not involving SR-IOV, you can determine the interface media duplex setting by running the following command: tmsh list net interface. Note: This workaround is valid only for VE configurations and only reports the VE's reported link state. A VM cannot determine any vSwitch's upstream link state via its own link state; VE knows only about the link between itself and the vSwitch, except in SR-IOV deployments, where there is no vSwitch and the link is direct.|
|372540||Migration of BIG-IP VE, whether live or powered off, commonly produces an innocuous warning message similar to the following on vSphere hypervisors: Virtual Ethernet card: 'Network adapter 1' is not supported. This is not a limitation of the host in general, but of the virtual machine's configured guest OS on the selected host. The message is benign and can safely be ignored. Workaround: None.|
|394817||Virtual Edition (VE) now supports CMP (that is, multiple TMMs running on the same device). For rate-limited licenses, the throughput rate is divided by the number of TMMs, so each TMM is capped at a fraction of the total licensed limit. After enabling CMP on VE with a rate-limited license, the maximum throughput for a single TCP/UDP connection is therefore decreased by the TMM count. For example, if a 200M license with one connection has a throughput of 180 Mbit/s before enabling CMP, then with two TMMs the expected throughput would be 90 Mbit/s, and with four TMMs, 45 Mbit/s. This is expected functionality. Workaround: None.|
|409234||FastL4 virtual servers might experience very low throughput on Virtual Edition (VE) when TCP Segmentation Offload is disabled. This occurs on VE with at least one FastL4 virtual server configured and TCP Segmentation Offload (TSO) disabled in the TMM (sys db tm.tcpsegmentationoffload). Numerous Transmit Datagram Errors appear for the FastL4 profile (tmsh show ltm profile FastL4). Affected FastL4 virtual servers might have very low throughput, which might occur if the hypervisor has Large Receive Offload (LRO) enabled; this is a hypervisor configuration issue. Low throughput might also occur when VE is passing traffic to other virtual machines running on the same physical hypervisor. Workaround: There are two workarounds: -- Enable TCP Segmentation Offload by modifying 'sys db tm.tcpsegmentationoffload'. -- Disable LRO on hypervisors running VE.|
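The first workaround for issue 409234 can be sketched as the following tmsh commands on the VE; the 'enable' value is an assumption based on the usual enable/disable values for BIG-IP db variables:

```shell
# Inspect the current TSO setting (output format may vary by version).
tmsh list sys db tm.tcpsegmentationoffload

# Re-enable TCP Segmentation Offload in the TMM
# (value 'enable' is assumed; verify against your version's db variable help).
tmsh modify sys db tm.tcpsegmentationoffload value enable
```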
|412817||The BIG-IP system is unreachable for IPv6 traffic via PCI pass-through interfaces, because current ixgbevf drivers do not support multicast receive. When configured to receive IPv6 traffic on a PCI pass-through interface, the BIG-IP guest is unable to see this traffic. Workaround: None.|
|434713||The licensed bandwidth limit applies to all traffic, including control plane traffic, rather than only load-balancing traffic. As such, a bandwidth-exceeded message might appear in the VE log file when there is significant non-load-balancing traffic passing through the data plane interfaces. Load-balancing packets may be dropped, resulting in lower throughput. Workaround: None.|
|470238||TMM restarts continuously when the number of cores specified in the license differs from the number of CPUs on the system, that is, when the value of perf_VE_cores in /config/bigip.license differs from the number of CPUs on the virtual machine. TMM continuously restarts, and no traffic can be handled. This is a rarely occurring issue. Workaround: Manually set the value of the DB variable provision.tmmcount to the value of perf_VE_cores specified in the license. To do so, run the following command: tmsh modify sys db provision.tmmcount _value_.|
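The workaround for issue 470238 can be sketched as the following shell session; the grep pattern is an assumption about how the perf_VE_cores line appears in the license file, and the value 4 is illustrative:

```shell
# Find the licensed core count in the license file
# (line format is an assumption; inspect the file if the pattern fails).
grep perf_VE_cores /config/bigip.license

# Set provision.tmmcount to the licensed value, for example 4:
tmsh modify sys db provision.tmmcount value 4
```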
|488430||LTM Virtual Edition (VE) does not support the cloud features suspend/save/migration on the Community Xen Hypervisor. This reduces migration functionality on the Community Xen Hypervisor platform. Workaround: Save the standard configuration in a UCS file and migrate the UCS file to different instances as needed.|
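The UCS-based workaround for issue 488430 can be sketched as follows; the archive filename and copy method are illustrative:

```shell
# On the source instance: save the running configuration to a UCS archive.
tmsh save sys ucs /var/local/ucs/migrate.ucs

# Copy the archive to the target instance (scp shown as one option),
# then restore it there:
#   scp /var/local/ucs/migrate.ucs root@target:/var/local/ucs/
#   tmsh load sys ucs /var/local/ucs/migrate.ucs
```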
|495523||MCPd goes into a restart loop after a change to the AWS instance type. This occurs in Virtual Edition (VE) after changing the underlying instance hardware in AWS, which is not supported behavior. The instance is not usable, and there is no error message to indicate the failure. Workaround: Save the configuration on the BIG-IP system, instantiate an instance of the desired type, and apply the saved configuration.|
|517454||BIG-IP VE running on the Azure cloud cannot report its hostname back to the Azure Fabric Controller, so the hostname is missing in the VE's dashboard in the Azure portal. Although the hostname is missing, there is no impact on BIG-IP VE functionality. Workaround: None.|
|524301||BIG-IP VE running on Amazon AWS does not support jumbo frames with the MTU set to 9001; a smaller MSS is suggested in the TCP connection's 3-way handshake. Some Amazon AWS instances have their NICs' MTU set to 9001 by default, and jumbo frame requests are not honored by BIG-IP VE. Workaround: Manually set the NICs' MTU to 9001 after the system has fully started.|
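The workaround for issue 524301 can be sketched as the following tmsh command, run after the system has fully started; the VLAN name 'internal' is an assumption for illustration:

```shell
# Set the MTU to 9001 on a data-plane VLAN (VLAN name is illustrative).
tmsh modify net vlan internal mtu 9001
```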
|538010||There is no support for statically assigning the management IP when 1NIC provisioning is enabled on supported VE platforms. If Virtual Edition (VE) is provisioned with 1NIC enabled on the Amazon AWS or Microsoft Azure public cloud services, a newly assigned static IP does not take effect; you are unable to configure a static IP in 1NIC mode. Workaround: In some cases, rebooting the system causes the new IP to take effect. If that does not work for your configuration, use a multi-NIC VE configuration.|
|538012||VE 1NIC provisioning shares the same IP address as both the management IP and the self IP address, so Virtual Edition (VE) with 1NIC enabled cannot pass any traffic through the data plane if a self IP address different from the DHCP management IP is assigned. The GUI loses its connection, and connectivity is lost until the self IP address is deleted via ssh/tmsh, or a virtual server on 443 that points to localhost is created. Note: This is because creation of a self IP appears to be the trigger that causes uNIC to redirect all 443 traffic to the TMM instead of Linux. However, there is no warning of what will happen, and it is extremely unintuitive. Workaround: Delete the newly created self IP address to restore access, or create a virtual server on 443 that points to localhost. As an alternative, use multi-NIC configurations.|
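The first workaround for issue 538012 can be sketched as the following tmsh commands from an ssh session; the self IP name 'selfip-1' is illustrative:

```shell
# List configured self IPs to identify the newly created one.
tmsh list net self

# Delete the offending self IP (name is illustrative) to restore GUI access.
tmsh delete net self selfip-1
```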
|550618||Executing 'tmsh load sys config default' returns the following error message: Loading configuration... /defaults/defaults.scf Syntax Error:(/defaults/defaults.scf at line: 97) 'description' may not be specified more than once. This occurs when a user attempts to reset the BIG-IP system configuration to default parameters using version 12.0.0 in Azure environments. The system configuration defaults cannot be reset. Workaround: Delete the VE instance in Azure, and then start a new instance. Move the registration key to the new instance. Important: F5 Support must release the license (called an 'allow move') to enable license provisioning on a new VE configuration.|
|540766||Cannot upgrade directly to 12.x from 10.x GTM. This is by design. Note: This is true if GTM was ever provisioned on the system, even if it is not currently provisioned. This occurs when upgrading a version 10.x GTM configuration directly to 12.x BIG-IP DNS. Upgrade halts with an error message similar to the following: ERROR: UCS version(v10.2.4) is less than v11.0.0 and GTM module config exists. Upgrade not supported to v12.0.0 or greater versions - exiting installation. See Solution SOL17158. Operation aborted. Workaround: Upgrade 10.x GTM configurations to 11.x GTM, and then upgrade to 12.x BIG-IP DNS.|
There are no known issues specific to Application Security Manager-Virtual Edition.
There are no known issues specific to Access Policy Manager-Virtual Edition.
There are no known issues specific to Application Acceleration Manager-Virtual Edition.
There are no known issues specific to Policy Enforcement Manager-Virtual Edition.
There are no known issues specific to Advanced Firewall Manager-Virtual Edition.
There are no known issues specific to Analytics-Virtual Edition.
For additional information, please visit http://www.f5.com.
You can find additional support resources and technical documentation through a variety of sources.
Free self-service tools give you 24x7 access to a wealth of knowledge and technical support. Whether it is providing quick answers to questions, training your staff, or handling entire implementations from design to deployment, F5 services teams are ready to ensure that you get the most from your F5 technology.
AskF5 is your storehouse for thousands of solutions to help you manage your F5 products more effectively. Whether you want to search the knowledge base periodically to research a solution, or you need the most recent news about your F5 products, AskF5 is your source.
The F5 DevCentral community helps you get more from F5 products and technologies. You can connect with user groups, learn about the latest F5 tools, and discuss F5 products and technology.