Manual Chapter : Setting Up an F5 Networks NVGRE Gateway Environment

Applies To:


BIG-IQ Cloud

  • 4.5.0, 4.4.0

Overview: Setting up an F5 Networks NVGRE gateway environment

This document provides instructions for installing the F5 Networks HNV Gateway PowerShell Module in Microsoft System Center Virtual Machine Manager (SCVMM) for integration into a Microsoft Hyper-V environment. The plug-in allows a BIG-IP device or BIG-IP Virtual Edition (VE) to act as a gateway between virtual networks in SCVMM and external networks; that is, with this plug-in, virtual machines can connect to the outside world using NVGRE tunnels. After you have made this connection, you can configure the BIG-IP systems to provide application services for those virtual machines. The BIG-IQ system is the management endpoint for the BIG-IP systems. By default, all communication from the F5 Networks HNV Gateway PowerShell Module occurs through the BIG-IQ system.

About network virtualization using generic routing (NVGRE)

Using generic routing encapsulation (GRE) for policy-based, software-controlled network virtualization supports multitenancy in public and private clouds. NVGRE encapsulates Ethernet frames in an NVGRE-formatted GRE packet. You can combine virtual network segments managed by NVGRE with segments managed by VXLAN, in multicast mode, unicast mode, or both.

NVGRE serves most data centers deploying network virtualization. The system encapsulates packets inside another packet, and the header of the new packet has the appropriate source and destination provider address (PA) IP address in addition to the virtual subnet ID (VSID), which is stored in the Key field of the GRE header. The VSID allows hosts to identify the customer's virtual machines for any given packet.
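The encapsulation described above can be sketched concretely. The following is not part of the F5 plug-in or SCVMM; it is a minimal Python illustration of the NVGRE outer GRE header, assuming the field layout standardized in RFC 7637 (Key Present bit set, protocol type 0x6558 for Transparent Ethernet Bridging, and the 24-bit VSID plus an 8-bit FlowID occupying the 32-bit Key field):

```python
import struct

def nvgre_gre_header(vsid: int, flow_id: int = 0) -> bytes:
    """Build the 8-byte GRE header used by NVGRE (RFC 7637).

    The Key Present bit is set, the protocol type is 0x6558
    (Transparent Ethernet Bridging), and the 24-bit VSID plus an
    8-bit FlowID occupy the 32-bit Key field.
    """
    if not 0 <= vsid < 2 ** 24:
        raise ValueError("VSID must fit in 24 bits")
    flags = 0x2000                      # K bit set; GRE version 0
    proto = 0x6558                      # Transparent Ethernet Bridging
    key = (vsid << 8) | (flow_id & 0xFF)
    return struct.pack("!HHI", flags, proto, key)

print(nvgre_gre_header(5001).hex())     # 2000655800138900
```

The outer IP header carrying this GRE payload uses provider addresses, while the encapsulated Ethernet frame carries the customer addresses.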

NVGRE is a policy-driven solution, so the provider addresses (PAs) and customer addresses (CAs) on the packets can overlap without problems. Consequently, all virtual machines on the same host can share a single PA.

These concepts are important for deploying NVGRE with Microsoft System Center Virtual Machine Manager (SCVMM):

  • Customer address (CA)
  • Provider address (PA)
  • Virtual subnets
  • Routing domains
  • Logical networks
  • IP pools for each logical network site
  • Logical switches with port profiles
  • Virtual port profiles
  • VM networks

For additional information about network virtualization concepts, you can consult Microsoft documentation, for example: http://blogs.msdn.com/b/microsoft_press/archive/2014/03/24/free-ebook-microsoft-system-center-network-virtualization-and-cloud-computing.aspx

About customer addresses

In NVGRE deployments with System Center Virtual Machine Manager (SCVMM), the customer address (CA) is the IP address assigned by the customer or tenant, based on the subnet, IP address range, and network topology. This IP address is visible only to the virtual machine and, eventually, other virtual machines within the same subnet VM network, if you allow routing.

In this example, ERICVM1 is a virtual machine currently running on Hyper-V host MTCPARIS-2. Its IPv4 address 192.168.0.2 is visible only to this virtual machine, and not to the underlying network fabric.

Screen snippet showing VM visibility of customer address

You can double-check this concept by connecting directly to the virtual machine, as in this example.

Command line verification of customer address visibility to VM

About provider addresses

In NVGRE deployments with System Center Virtual Machine Manager (SCVMM), the provider address (PA) is the IP address assigned by the administrator or by SCVMM, based on the physical network infrastructure. This IP address, visible only on the physical network, is used when Hyper-V hosts (either standalone or clustered) and other devices are exchanging packets, when participating in network virtualization.

This example shows the virtual machines running on the Hyper-V host MTCPARIS-2, which includes the ERICVM1 virtual machine. The PA associated with the ERICVM1 virtual machine is 10.10.0.5, which is never visible to the ERICVM1 virtual machine itself.

Example of provider address for VM participating in network virtualization
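The CA-to-PA relationship above is policy, not routing state: each virtualization host consults a policy table to find the PA of the host running a given VM. This is not a Microsoft or F5 API; it is a toy Python sketch of that lookup, using the example addresses from this section:

```python
# Toy HNV policy table (hypothetical data, per the example above):
# customer address -> provider address of the host running that VM.
POLICY = {
    "192.168.0.2": "10.10.0.5",   # ERICVM1 on Hyper-V host MTCPARIS-2
}

def outer_destination(customer_address: str) -> str:
    """Return the PA a sending host puts in the outer IP header when
    encapsulating a frame bound for this customer address."""
    return POLICY[customer_address]

print(outer_destination("192.168.0.2"))   # 10.10.0.5
```

Because the mapping is driven by policy distributed from SCVMM, customer addresses in different virtual networks can overlap, and many VMs on one host can share a single PA.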

About virtual subnets

In NVGRE deployments with System Center Virtual Machine Manager (SCVMM), a unique virtual subnet ID (VSID) identifies an IP subnet at Layer 3 and a broadcast domain boundary at Layer 2, similar to VLAN technology. The VSID must be unique within the data center and within the range of 4096 to 2^24-2. Two customers in a hosted data center cannot both use the same VSID, even if they have different routing domains.

The VSID is a setting of the port of the virtual switch (vSwitch). However, it is presented to you as a property of the virtual network interface (VNI) of a VM.

In this example, the VSID for the ERICVM1 virtual machine is 7829576.

Example including VSID for a virtual machine
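The range and uniqueness rules above can be expressed in a few lines. This is not SCVMM's allocator; it is a hedged Python sketch of the stated constraints (VSIDs run from 4096 to 2^24-2 and must be unique across the data center):

```python
VSID_MIN, VSID_MAX = 4096, 2 ** 24 - 2   # valid HNV VSID range

def validate_vsid(vsid: int, in_use: set) -> None:
    """Reject a VSID that is out of range or already assigned to
    another customer in the data center, then record it."""
    if not VSID_MIN <= vsid <= VSID_MAX:
        raise ValueError(f"VSID {vsid} outside [{VSID_MIN}, {VSID_MAX}]")
    if vsid in in_use:
        raise ValueError(f"VSID {vsid} already in use in this data center")
    in_use.add(vsid)

assigned = set()
validate_vsid(7829576, assigned)   # the ERICVM1 example VSID: accepted
```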

About routing domains

In NVGRE deployments with System Center Virtual Machine Manager (SCVMM), a routing domain defines a relationship between the virtual subnets created by the tenants, and identifies the VM network.

  • The routing domain ID (RDID) has a globally unique ID (GUID) within the data center.
  • The network virtualization stack enables Layer 3 routing between the subnets with a default gateway (always x.x.x.1), which cannot be disabled or configured.
  • Hyper-V network virtualization (HNV) provides distributed Layer 3 routing between virtualized subnets by including a network virtualization routing extension natively inside the Hyper-V virtual switch running on each Hyper-V host.
  • This distributed router can make cross-subnet routing decisions locally within the vSwitch to directly forward traffic between VMs on different virtualized subnets within the same virtual network or routing domain.
  • To manage and distribute the appropriate routing policies to each Hyper-V host, System Center 2012 R2 VMM performs as the routing policy server, enabling the configuration of distributed routers across many Hyper-V hosts to be easily coordinated from a single, centralized point of administration.

This example shows two different routing domains on the same Hyper-V host.

Example of two routing domains on a single Hyper-V host
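The default gateway rule from the list above (always x.x.x.1, which cannot be disabled or configured) is easy to illustrate. This is a conceptual Python sketch, not part of HNV or the plug-in:

```python
import ipaddress

def hnv_default_gateway(virtual_subnet: str) -> str:
    """Return the distributed router's default gateway for a customer
    virtual subnet: always the first usable address (the x.x.x.1)."""
    net = ipaddress.ip_network(virtual_subnet)
    return str(net.network_address + 1)

print(hnv_default_gateway("192.168.0.0/24"))   # 192.168.0.1
```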

About logical networks

In NVGRE deployments with System Center Virtual Machine Manager (SCVMM), a logical network can contain one or more associated network sites. A network site is a user-defined named grouping of IP subnets, VLANs, or IP subnet and VLAN pairs, which is used to organize and simplify network assignments. Logical networks are useful in large environments for mapping and streamlining network connectivity and dependencies in the configuration.

Uses for logical networks include but are not limited to these:

  • Management: Contains the IP subnet used for management. Typically, both VMM and the Hyper-V servers are connected to this physical network. If you have more than one site and/or several VLANS, you can add all of these to the same logical network.
  • Cluster: Contains the IP subnet and VLAN for cluster communication and Live Migration.
  • Front end: Contains the IP subnet used for public IP addresses.
  • PA network: Contains the IP subnet used for provider addresses.

This logical network is dedicated to network virtualization, which is enabled at the logical network level. The network must be isolated; do not use any of the other networks for this purpose.

The logical network in this example has an associated IP pool, so that SCVMM can manage IP address assignments to the hosts dedicated to network virtualization, the virtualization gateway VMs, and the virtualization hosts running virtual machines connected to VM networks.

Specifying a logical network
Adding network sites to a logical network

About IP address pools

In NVGRE deployments with System Center Virtual Machine Manager (SCVMM), you must have an IP address pool for each logical network site, so that VMM can assign the correct IP configuration to its resources within this network.

In these configuration screen examples, note that there is no direct mapping of the PA network to the hosts. The PA network is available to the hosts only through this configuration, together with Uplink port profiles and logical switches.

Important: Do not configure network virtualization on any other logical networks that you present to the same hosts.
Specifying the IP address pool name and logical network
Specifying a network site and the IP subnet
Specifying the range of IP addresses for a pool
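Conceptually, an IP address pool is just an inclusive range that VMM draws from when it assigns addresses to hosts, gateway VMs, and virtualization hosts. The following Python sketch (not a VMM API; the range values are hypothetical) shows the expansion:

```python
import ipaddress

def pool_addresses(first: str, last: str) -> list:
    """Expand an IP address pool range (inclusive), the way VMM
    hands addresses out to its managed resources."""
    lo = int(ipaddress.ip_address(first))
    hi = int(ipaddress.ip_address(last))
    return [str(ipaddress.ip_address(i)) for i in range(lo, hi + 1)]

print(pool_addresses("10.10.0.4", "10.10.0.6"))
```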

About logical switches with port profiles

In NVGRE deployments with System Center Virtual Machine Manager (SCVMM), you can use port profiles and logical switches to create identical capabilities for network adapters across multiple hosts. Port profiles and logical switches act as containers for the properties or capabilities that you want your network adapters to have. Instead of configuring individual properties or capabilities for each network adapter, you can specify the capabilities in port profiles and logical switches, and then apply these capabilities to the appropriate adapters. This can simplify the configuration process and ensure that your hosts use the correct load balancing algorithm and that the virtual adapters have the right capability and QoS settings.

About virtual port profiles

In NVGRE deployments with System Center Virtual Machine Manager (SCVMM), you can take advantage of several port profiles that are shipped with SCVMM and use the existing profiles for host management, cluster, and live migration.

You can see the profiles in SCVMM by navigating to Port Profiles on the networking tab in Fabric.

For example, on the Security Settings screen, you can enable Allow guest specified IP addresses, so that VMM can detect changes made to tenants within the guests, and update the NVGRE policy in the environment.

Example of setting security for a virtual port profile

About VM networks

Note: This step is not necessary if, when you created the logical networks, you selected Create a VM network with the same name to allow virtual machines to access this logical network directly.

You need to create VM networks with 1:1 mapping to your logical networks in the fabric.

Creating a VM network
Specifying VM subnets
Adding external connectivity to a VM network

Before you begin the installation

Before you install the F5 Networks HNV Gateway PowerShell Module, you need to prepare the BIG-IP system. F5 Networks strongly recommends using Engineering Hotfix 121.14 for v11.5.1-HF2. It includes bug fixes that ensure that monitors on VIPs work correctly in an HA setup that also uses tunnels.

Make sure that the BIG-IP system is configured with these considerations in mind.

  • At least one BIG-IP system is configured with at least one IP address, preferably on the management interface.
  • At least one VLAN has connectivity to the provider network.

When you configure the SCVMM gateway using a config file, you specify the following BIG-IQ parameters:

  • The IP address of the F5 BIG-IQ system
  • The name of a BIG-IQ device resolver group that contains either one standalone BIG-IP system or two BIG-IP systems in a device cluster

For a standalone BIG-IP system

If you are setting up a standalone BIG-IP system, verify that you have not configured a masquerading MAC address.

For a pair of BIG-IP systems in a device group

If you are setting up a pair of BIG-IP devices as a device group, verify the following.

  • You are using a Sync-Failover device group.
  • Auto-Sync and Network Failover are turned on for the device group.
  • You have configured a masquerading MAC address.
  • You have made a note of the traffic group used for floating objects; you need to provide it in the configuration file.

When you are not using route domains

When you do not use route domains (that is, when UseRouteDomains is set to false in the F5 Networks HNV Gateway PowerShell Module configuration file), you must create a forwarding virtual server on each of your BIG-IP systems.

Here is an example using the tmsh command line utility.

create ltm virtual scvmm-vs destination 0.0.0.0:0 mask any ip-forward source-address-translation { type automap }

Additional information

The following files are shipped with the plug-in.

  • Plug-in binaries: C:\Windows\System32\WindowsPowerShell\v1.0\Modules\F5GatewayProvider
  • Sample configuration file for one BIG-IP system: C:\Windows\System32\WindowsPowerShell\v1.0\Modules\F5GatewayProvider\gateway-one-bigip.cfg
  • Sample configuration file for two BIG-IP systems in a redundant (HA) pair: C:\Windows\System32\WindowsPowerShell\v1.0\Modules\F5GatewayProvider\gateway-bigip-ha-pair.cfg
  • Script to create a BIG-IQ device resolver group: C:\Windows\System32\WindowsPowerShell\v1.0\Modules\F5GatewayProvider\Setup-Device-Group.ps1
  • Log file: C:\Program Files\Microsoft System Center 2012 R2\Virtual Machine Manager\bin\F5-SCVMM-Gateway.log

Task summary

Before you start this installation, you need to acquire the file F5GatewayPowerShellSetup.msi, which you can find on the BIG-IQ system. You must use an account with administrative privileges to complete the installation. Ensure that the BIG-IP system includes a local IP address in the provider IP address space, and make a note of this address. Also, ensure that the network interface (NIC) to be used for the provider addresses is named WNVNIC.

Task list

Creating the BIG-IQ device resolver group

When you configure the F5 Networks HNV Gateway PowerShell Module, you need to create a BIG-IQ device resolver group for each gateway you configure.
  1. Locate the setup script file C:\Windows\System32\WindowsPowerShell\v1.0\Modules\F5GatewayProvider\Setup-Device-Group.ps1.
  2. Run the file. The setup script takes the parameters shown in the table. If you do not specify the credentials as command-line arguments, the system prompts you for them.
    Parameter               Required?  Description
    -BigIqAddress           Yes        IP address of the BIG-IQ system. This should match the BigIqAddress setting in the configuration file.
    -BigIqCred              No         Credential for the admin account on the BIG-IQ system. (This is a PSCredential, as are the other credentials.)
    -Force                  No         If the group exists, force it to be recreated.
    -ActiveBigIp            Yes        The IP address of the active BIG-IP system.
    -StandbyBigIp           No         The IP address of the standby BIG-IP system, if used.
    -GroupName              Yes        The name of the group to create. This should match the BigIqDeviceGroup setting in the configuration file.
    -ActiveBigIpAdminCred   No         Credential for the admin account of the active BIG-IP system.
    -ActiveBigIpRootCred    No         Credential for the root account of the active BIG-IP system.
    -StandbyBigIpAdminCred  No         Credential for the admin account of the standby BIG-IP system.
    -StandbyBigIpRootCred   No         Credential for the root account of the standby BIG-IP system.

Installing the F5 Networks HNV Gateway PowerShell Module

The F5 Networks HNV Gateway PowerShell Module provides a setup wizard to install the BIG-IP system as an NVGRE gateway for System Center Virtual Machine Manager (SCVMM). After the installation is complete, you must restart the SCVMM services, or reboot the SCVMM server.
  1. Run the file F5GatewayPowerShellSetup.msi. F5 Networks Gateway PowerShell Module Setup Wizard
  2. Click Next.
  3. Accept the EULA, and for the Installation Type, select Complete.
  4. Configure the F5 Networks Gateway.
    1. Open the sample configuration file that corresponds to your setup, either a standalone BIG-IP system (gateway-one-bigip.cfg) or a BIG-IP HA pair (gateway-bigip-ha-pair.cfg), at this location: {SYSDIR}\WindowsPowerShell\v1.0\Modules\F5GatewayProvider. For default installations, {SYSDIR} is C:\Windows\System32.
    2. Edit the file, as indicated.

<GatewaySettings>
  <BigIqAddress>0.0.0.0</BigIqAddress>
  <BigIqDeviceGroup>scvmm</BigIqDeviceGroup>
  <ActiveProviderAddress></ActiveProviderAddress>                [BIG-IP HA pair only]
  <StandbyProviderAddress></StandbyProviderAddress>              [BIG-IP HA pair only]
  <ProviderFwEnforcedPolicy></ProviderFwEnforcedPolicy>
  <ProviderFwStagedPolicy></ProviderFwStagedPolicy>
  <FloatingTrafficGroup>traffic-group-1</FloatingTrafficGroup>   [BIG-IP HA pair only]
  <UseRouteDomains>true</UseRouteDomains>
  <RouteDomainRange first="1" last="500"/>
  <UseInboundTunnelMode>false</UseInboundTunnelMode>
  <CreateForwardingVirtual>true</CreateForwardingVirtual>
  <ForwardingVirtualSNAT>automap</ForwardingVirtualSNAT>
  <CustomerSelfIpAllow>all</CustomerSelfIpAllow>
  <CustomerSelfFwStagedPolicy></CustomerSelfFwStagedPolicy>
  <CustomerSelfFwEnforcedPolicy></CustomerSelfFwEnforcedPolicy>
  <TunnelMtu>0</TunnelMtu>
  <TunnelProfile>nvgre</TunnelProfile>
  <DumpGatewayState>true</DumpGatewayState>
</GatewaySettings>

Considerations for these settings:
      • The setting BigIqAddress is the management IP address of the BIG-IQ system.
      • For a standalone system, BigIqDeviceGroup is the name of a BIG-IQ device group containing a single BIG-IP system. This is the name of the BIG-IQ device resolver group that you created previously.
      • For a BIG-IP HA pair, BigIqDeviceGroup is the name of a BIG-IQ device group that contains two BIG-IP systems: one active, one standby. This is the name of the BIG-IQ device resolver group that you created previously.
      • If you create more than one gateway instance on a pair of BIG-IP systems, you must ensure that each of them uses the same non-floating provider address. To do this reliably, you pre-create the provider IP address and specify it in the ActiveProviderAddress and StandbyProviderAddress settings. Note that you can attach a single gateway to multiple virtual networks without needing this setting. It is applicable only when using a device cluster, not a standalone BIG-IP system.
      • If you have not provisioned Advanced Firewall Manager (AFM), or you do not want to use AFM on provider self IP addresses, leave empty the values for ProviderFwEnforcedPolicy and ProviderFwStagedPolicy, which specify the enforced firewall and staged firewall policies for the provider. If you specify a policy name, it is your responsibility to ensure that the policy is created before it is needed.
      • For a BIG-IP HA pair, FloatingTrafficGroup is the name of the traffic group to use for objects that are synced between BIG-IP devices in the device cluster.
      • If you might ever have two different VM networks that contain the same IP address, set UseRouteDomains to true. If you are not sure, retain the setting true. If you are sure that you will never reuse IP addresses in different VM networks, you could set this value to false to avoid a small scalability hit in configuring the BIG-IP system.
      • The setting RouteDomainRange is relevant only if UseRouteDomains is set to true. It specifies the range of route domains that can be used by this gateway on the BIG-IP systems. If you create multiple gateways on the same set of BIG-IP systems (common in an active-active gateway setup), this setting ensures that each gateway does not use route domains belonging to other gateways that share the same BIG-IP device. Make sure that for each gateway that shares a set of BIG-IP devices, you specify a unique range of route domains. Gateways on different BIG-IP devices can use the same range of route domains. Note that this range is inclusive; all the route domains from first to last are used.
      • If you are using a standard BIG-IP system running software v11.5.1 or earlier, set UseInboundTunnelMode to false. If you are using the engineering hotfix that makes the NVGRE tunnel pair look like a single tunnel, set UseInboundTunnelMode to true. If you are not sure, ask F5 Networks support, or set it to false. Although both values should work, there are advantages to setting it to true, when available.
      • If CreateForwardingVirtual is set to true, when a gateway is added to a customer virtual subnet, a forwarding virtual server is added. Note that this happens only if UseRouteDomains is also set to true.
      • The ForwardingVirtualSNAT setting specifies how you want to handle SNAT on the forwarding virtual, if you requested one. The possible values are none and automap. If you are not sure, retain the setting automap.
      • The CustomerSelfIpAllow setting specifies which traffic is allowed when the plug-in creates a self IP address on a customer's virtual subnet. The possible values are all, none, and default. Custom values are not allowed.
      • If you have not provisioned Advanced Firewall Manager (AFM), or you do not want to use AFM on customer self IP addresses, leave empty the values for CustomerSelfFwStagedPolicy and CustomerSelfFwEnforcedPolicy, which specify the staged firewall and enforced firewall policies for the customer. If you specify a policy name, it is your responsibility to ensure that the policy is created before it is needed.
      • If you need to set the MTU on the NVGRE tunnels, specify a value for the TunnelMtu setting. If you retain the value 0, the system automatically sets the MTU.
      • Typically, you do not change the TunnelProfile setting. However, if you need to change it to an alternate NVGRE tunnel profile, you can set it using this parameter. The tunnel must be an NVGRE tunnel that already exists on the BIG-IP system.
      • If the setting DumpGatewayState is true, the system creates a file in the same directory as the log file, named F5-Gateway-CONFIGFILENAME-state.txt. The file contains information about the mapping of SCVMM routing domains to route domains, and the set of VSIDs managed within each routing domain. Although it takes some time to generate this file, it is useful if you are building a BIG-IP configuration on top of the configuration generated by the plug-in. It might be less useful if UseRouteDomains is false.
      • The BIG-IP partition setting is currently ignored.
    3. Save or copy the file to the same location. When you create the VM gateway, you need to specify the name of this file as the network service connection string.
  5. After the installation has completed, restart the SCVMM services, or reboot the SCVMM server. Screen capture showing SCVMM restart

Configuring the VM gateway BIG-IP system to forward packets

After you add the BIG-IP system as a VM gateway, you need to configure the system to forward packets.
  1. Create an external VLAN on the BIG-IP system that has external access.
  2. Create a default route on the BIG-IP system that directs traffic outward.

Configuring the F5 gateway in SCVMM

Before starting this task, you must install and load the BIG-IP F5 Networks HNV Gateway PowerShell Module.
After you install the gateway, you can configure the VM network to use the gateway.
  1. In the VMM's Settings area, create a new Run As account. This account includes the user name and password of the BIG-IP system you are using. It does not use domain credentials. You will select this Run As account later in the configuration process. If you already have an appropriate Run As account, you do not need to create another. Multiple gateways can refer to the same Run As account. Creating a Run As account
  2. In the Fabric portion of the user interface, click Add Resources, and select Network Service, as shown. Adding a network service to the fabric
  3. Type a name for the gateway, as shown, and then click Next. Assigning a name to the gateway
  4. For the Manufacturer, select F5 Networks, Inc., and for the Model, select BIG-IP, as shown, and then click Next. Selecting a manufacturer and model
  5. Select the Run As account you created previously, and then click OK. Selecting a Run As account
  6. Specify the connection string for the gateway.

    This is the name of the configuration file you edited and saved.

    Specifying a connection string for the gateway

  7. Click Test to test the gateway.

    SCVMM exercises the gateway functions, as shown in this example. Only some of the network service functionality will be implemented.

    Testing the gateway functionality

  8. Select a host group for the plug-in, as shown. Selecting a host group
  9. Confirm the settings you configured, as shown in this example, and then click Finish. Summary of settings The SCVMM performs an initial configuration of the BIG-IP system. This might take a few seconds.
  10. Right-click the gateway you created, and select Properties, as shown. Gateway properties
  11. In the left navigation pane, select Connectivity, and specify the front end and back end SCVMM interfaces on the BIG-IP system, as shown, and then click OK.

    Although the front end connection does not matter to the gateway plug-in, you must select one. For the back end connection, select the BIG-IP VLAN that has connectivity on the SCVMM provider network. Note that the BIG-IP system will be configured with a self IP address on this network, using a dynamically allocated address.

    Network connectivity

    The gateway you created is now ready to use.
  12. In the VM Network tab, right-click the VM network on which you want the BIG-IP system to provide gateway services, and select Properties, as shown. VM network selection and properties
  13. In the left navigation pane, select Connectivity, select the check box Connect directly to an additional logical network, and then select the gateway you created, as shown. Adding the gateway
  14. When you have finished, click OK. It might take a few seconds for completion of the BIG-IP configuration.

Viewing F5 Networks HNV Gateway PowerShell Module logs

To view logs for the F5 Networks HNV Gateway PowerShell Module, navigate to

C:\Program Files\Microsoft System Center 2012 R2\Virtual Machine Manager\bin\F5-SCVMM-Gateway.log

Example of F5 Networks NVGRE gateway environment

This illustration is an example of a configured F5 Networks NVGRE gateway environment.

Example of F5 Networks NVGRE gateway environment