Manual Chapter: Troubleshooting Application Connector Service Center

Troubleshooting common problems

These are possible solutions to problems that might occur when configuring or using Application Connector.

Each entry below lists a symptom, followed by its possible solution and related comments.
I cannot find the user interface to install the Service Center. Verify that the iApps® LX license is enabled and the /var/config/rest/apps/enable file exists on your system. Without this license, nothing related to iApps LX will display.
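A quick way to check both conditions from the BIG-IP command line (a sketch; the exact license field name varies by version):

```shell
# Confirm the iApps LX enable file exists
ls -l /var/config/rest/apps/enable
# Look for the iApps LX feature in the license output
tmsh show sys license | grep -i iapp
```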
I am trying to install Service Center, but it fails. Make sure that you have sufficient disk space in /var. The installation RPM file is approximately 11MB, and requires 33MB minimum in the /var partition. When you download a new RPM file, the older ones are not discarded, so it is not difficult to run low on /var disk space.
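You can check the available space before installing; this runs on any Linux shell:

```shell
# Show free space on the filesystem holding /var
# (the install needs roughly 33 MB free for the ~11 MB RPM)
df -h /var
```

If space is low, remove older Application Connector RPM downloads before retrying.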
I cannot connect to the Service Center from the Proxy. Before anything else, try pinging the virtual address from the proxy. If you can ping the virtual address, verify that the virtual server you configured to connect to the Service Center has the server_traffic_director iRules® LX rule associated with it, using this command: tmsh list ltm virtual application_connector_virtual
ltm virtual application_connector_virtual {
	destination 10.1.10.200:https
    ip-protocol tcp
    mask 255.255.255.255
    pool ac_dummy_pool
    profiles {
        clientssl {
            context clientside
        }
        http { }
        tcp { }
    }
    rules {
        application_connector_plugin/server_traffic_director 
    }
    source 0.0.0.0/0
    translate-address enabled
    translate-port enabled
    vs-index 10
}
In some scenarios the iRule can be misplaced or accidentally removed. Check the tmsh output for these cases:
  • If the rules parameter is non-existent, in Service Center, click Config, click Proxy Virtual Server, and select the virtual server.
  • If the rules parameter lists "client_traffic_director" and not "server_traffic_director," in Service Center, click Config, click Proxy Virtual Server, and select the virtual server.
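If the rule is missing, it can also be re-attached from tmsh; a sketch using the names from the listing above:

```shell
# Attach the server-side iRule to the proxy virtual server
tmsh modify ltm virtual application_connector_virtual rules { application_connector_plugin/server_traffic_director }
# Save the change
tmsh save sys config
```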
Check that the virtual server is listening on port 443. The websocket uses WSS (WebSocket Secure), which requires TLS; therefore you need to configure a clientssl profile on the proxy virtual server, or the proxy will not be able to connect to it.  
On the BIG-IP® system, review the /var/log/restnoded/app_conn.log file to see if it is receiving inbound requests.  
The BIG-IP virtual server is not passing traffic to the cloud nodes. Check the pool member status of the cloud pool members.

Pool member status should be green (up) or blue (unknown); it should not be red (down), gray (disabled), or black (forced offline). If the pool member is red, check that built-in monitor types such as icmp or http are not being used, as these do not work with Application Connector.

If the pool member is marked as gray (disabled) or black (forced offline), you can re-enable the node from the Application Connector Proxy.
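To check which monitor, if any, is assigned, assuming the pool from the virtual server listing above (ac_dummy_pool):

```shell
# Show the monitor assigned to the cloud pool; built-in monitors such as
# icmp or http do not work with Application Connector
tmsh list ltm pool ac_dummy_pool monitor
```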

 
Make sure that the BIG-IP system has a default gateway configured.  
See if you get a 403 Failed to Establish Connection error. Double check the proxy virtual server configuration, then make sure that you have an http profile and clientssl profile configured on the virtual server.  
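A sketch for confirming the profiles from tmsh, using the virtual server name from the earlier listing:

```shell
# Both an http and a clientssl profile should appear in the output
tmsh list ltm virtual application_connector_virtual profiles
```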
I try to add nodes to the proxy but get a BIG-IP iControl REST error. Verify that the IP address of the node is truly unique. You also cannot add a node if its instanceID, IP address, or name is already associated with an existing node. BIG-IP software has similar rules that prevent adding the same node twice with the same IP address or name. If the node is a duplicate, you will get a 409 error.  
Make sure that the cloud node already exists in the BIG-IP system configuration. Things can go wrong with the Application Connector Service Center, and it might not clean itself up completely, leaving cloud nodes behind in the config. Think of Service Center as a fancy node-adder: the proxy tells it what nodes exist, and Service Center adds them as regular nodes to bigip.conf. If Service Center later has a problem, for example, it crashes, or you delete it and reinstall it, or some other problem or bug causes it to lose connection permanently with the proxies, these nodes might be left behind in the configuration. The nodes are invalid and need to be removed before you try to add them again using the proxy.
Service Center treats different nodes as identical when I add them from two proxies. See if the nodes have the same IP address. If you are configuring two proxies from two separate clouds, you might think you can have nodes with identical IP addresses in each of the clouds, as long as the proxy ID, instance ID, and virtual private cloud (VPC) ID are different. The BIG-IP system will let you think this too, but the architecture of a BIG-IP system is such that a node is an IP address, port, and route domain.

Route domains are not supported in version 1.0.0, so it is currently not possible to have multiple VPCs containing nodes with identical IP addresses.

I cannot configure more than one service center. Application Connector Service Center version 1.0.0 does not support using more than one service center. Verify that there is only one application_connector_plugin object in bigip.conf:
[root@bigip1:Active:Standalone] config # tmsh list ilx plugin
ilx plugin application_connector_plugin {
    disk-space 32872
    extensions {
        application_connector_ext {
            concurrency-mode single
        }
    }
    from-workspace application_connector
    staged-directory /var/ilx/workspaces/Common/application_connector
}
This version of Application Connector Service Center has these limitations:
  • You can have only one traffic group.
  • You can have only one service center.
  • You cannot use partitions other than the Common partition.

    You can deploy more than one service center, but it will not work right: one will be up and one will be down, and the down one will have the configuration of the up one.

Traffic is passing but pool member statistics read 0. Application Connector v1.0.0 does not have per-member statistics.

You can see aggregated traffic statistics in the Application Connector Service Center, but per-member statistics are not implemented. In the LTM® statistics you will see zero (0), because all of the network traffic goes through the iRule, not the pool member.

You can see an approximation of per-member connection statistics by running these commands on the command line of your BIG-IP device:

  • tmsh show ltm pool <pool-member>, where <pool-member> is the name of a specific pool member
  • istats dump
 
When my BIG-IP system failed over, the active device came up with all the nodes marked as Forced Offline. Wait for the system to stabilize. A coalesce process runs after failover; it usually finishes in 20-30 seconds but can take up to a couple of minutes to complete.  
There are no persistence records. If you do not see any data when you run the tmsh show ltm persist persist-records command, check that your virtual server has cookie persistence properly configured.  
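Assuming the virtual server from the earlier listing, you can confirm the persistence profile and then re-check for records:

```shell
# Show the persistence profile attached to the virtual server
tmsh list ltm virtual application_connector_virtual persist
# Records appear only after traffic has passed through the virtual server
tmsh show ltm persist persist-records
```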
Pool members are incorrectly marked down by the monitor. The standby unit always reports down for members that are available using Application Connector.  
I cannot uninstall and re-install Service Center, and I keep getting errors. The iApps LX RPM database might be corrupted.

Run these commands on each of your BIG-IP devices to reset the RPM database, and then you can reinstall Service Center:

# Remove the corrupted RPM database files
rm -f /var/config/rest/rpm/__db*
# Rebuild the RPM database in place
/opt/bin/rpm --rebuilddb --dbpath /var/config/rest/rpm
# Stop the REST framework daemons before clearing their storage
bigstart stop restjavad restnoded
rm -rf /var/config/rest/storage
rm -rf /var/config/rest/index
# Restart the daemons; the storage is recreated on startup
bigstart start restjavad restnoded
 
The Application Connector RPM is not syncing to the standby device, and neither is the iApps LX that I deployed. After uploading the RPM on the Active device, make sure that you wait a sufficient amount of time for the RPM to be copied to the standby. On slow disks (such as in a virtual machine (VM) environment), this could take 2-3 minutes to complete.  
Verify that /config/f5-rest-device-id is unique among your BIG-IP devices. If two devices share the same ID, delete the file and restart restjavad to regenerate it:
rm /config/f5-rest-device-id
bigstart restart restjavad

Then follow the instructions for "I cannot uninstall and re-install Service Center, and I keep getting errors" to delete all queued transactions, and then try to reinstall the RPM.

 

Configure Application Connector in a Sync-Failover device group

Before you set up a high availability configuration with Application Connector, you should already have a working Sync-Failover device group set up that includes the device where you initially installed and configured Application Connector. For more information about setting up a high availability configuration, see BIG-IP Device Service Clustering: Administration.
After you have configured Application Connector on one F5® device, you need to make sure that the configuration is synchronized across the device group.
  1. On each device in the device group, connect using the serial console or by opening an SSH session.
  2. Verify that the devices include the app-connector iApps LX RPM file.
    /opt/bin/rpm -qva --dbpath /var/config/rest/rpm/
    1. If the RPM file is not listed on the other devices, log in to the configured device and sync the latest configuration changes to the other devices in the device group.
      tmsh save sys config
    2. If the RPM file still is not listed on the other devices, manually install the app-connector iApps LX RPM file.
  3. Sync the latest configuration changes to the other devices in the device group.
    tmsh save sys config
  4. On each device in the device group, use the Configuration utility to create a new instance of the iApps LX Template.
  5. After a few minutes, verify that the standby device includes the new instance.
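The save and sync in step 3 can also be done from tmsh; a sketch assuming a Sync-Failover device group named app_failover_group (the group name is an assumption):

```shell
# Save the running configuration on the configured device
tmsh save sys config
# Push the configuration to the other members of the device group
# (replace app_failover_group with your Sync-Failover device group name)
tmsh run cm config-sync to-group app_failover_group
```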
The Application Connector Service Center is configured to run in a high availability configuration.

About using iStats for Application Connector

The Application Connector Service Center supports iStats on the number of open connections per pool member. You can view these statistics using the TMOS Shell (tmsh).
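For example, to view the open-connection counters, assuming the pool from the earlier listing (ac_dummy_pool):

```shell
# Dump all iStats counters and filter for the cloud pool
istats dump | grep ac_dummy_pool
# Aggregated view of the pool in tmsh
tmsh show ltm pool ac_dummy_pool
```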

For more information about iStats and tmsh, see these references:

  • iStats wiki documentation (devcentral.f5.com/wiki/iRules.iStats.ashx)
  • Traffic Management Shell (tmsh) Reference Guide
