Manual Chapter: Terminology Used in this Implementation

Applies To:

F5 SSL Orchestrator

  • 13.0.0

Terminology Used in this Implementation

This section defines some of the terms used in this document.

  • Ingress device

    The ingress BIG-IP® system is the device (or Sync-Failover device group) to which each client sends traffic. In the scenario where both ingress and egress traffic are handled by the same BIG-IP® system, ingress refers to the VLAN(s) where the client sends traffic. The ingress BIG-IP® system (or ingress VLAN(s)) decrypts the traffic, classifies each connection based on protocol, source, destination, and so on, and then passes it through a service chain you configure for inspection (or allows certain connections to bypass service-chain processing based on your selections).

  • Egress device

    The egress BIG-IP® system is the device (or Sync-Failover device group) that receives the traffic after a connection traverses the chosen service chain and then routes it to its final destination. In the scenario where both ingress and egress traffic are handled by the same BIG-IP® system, egress refers to the VLAN(s) where traffic leaves the BIG-IP® system to the Internet.

  • Inline services

    Inline services pass traffic through one or more service (inspection) devices at Layer 2 (MAC, bump-in-the-wire) or Layer 3 (IP). Each service device communicates with the ingress BIG-IP® device over two VLANs, called Inward and Outward, which carry traffic toward the intranet and the Internet, respectively. You can configure up to ten inline services, each with multiple defined devices, using SSL Orchestrator. A minimal data model of this pairing appears below.
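
    The pairing of each inline service with its Inward and Outward VLANs can be pictured with a small data model. This is an illustrative sketch only; the class and field names (InlineService, inward_vlan, and so on) are hypothetical and not part of the product.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class InlineService:
        """Toy model of one inline service and its two VLANs."""
        name: str
        layer: int                 # 2 (bump-in-the-wire) or 3 (IP)
        inward_vlan: str           # carries traffic toward the intranet
        outward_vlan: str          # carries traffic toward the Internet
        devices: list[str] = field(default_factory=list)

    # Hypothetical Layer 2 IPS service with two inspection devices.
    ips = InlineService(
        name="ips_service",
        layer=2,
        inward_vlan="ips_inward",
        outward_vlan="ips_outward",
        devices=["10.0.1.10", "10.0.1.11"],
    )
    ```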

  • Receive-only services

    Receive-only services refer to services that only receive traffic for inspection, and do not send it back to the BIG-IP® system. Each receive-only service provides a packet-by-packet copy of the traffic (for example, the decrypted plaintext) passing through it to an inspection device, as in the sketch below. You can configure up to ten receive-only services using SSL Orchestrator. For more information on receive-only services, see Creating receive-only services for traffic inspection.
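
    The one-way, copy-only nature of a receive-only service can be illustrated with a short sketch. This is not product code: the inspector address and the mirror function are hypothetical, and UDP stands in for whatever transport a real tap would use.

    ```python
    import socket

    # Hypothetical tap: forward a copy of each decrypted payload to a
    # receive-only inspection device. The device never answers, so the
    # copy is fire-and-forget.
    INSPECTOR = ("192.0.2.50", 9000)   # placeholder inspection-device address

    tap = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    def mirror(plaintext: bytes) -> None:
        """Send a packet-by-packet copy of decrypted traffic; swallow
        errors so mirroring can never stall the real connection."""
        try:
            tap.sendto(plaintext, INSPECTOR)
        except OSError:
            pass
    ```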

  • ICAP services

    Each ICAP service uses the ICAP protocol (https://tools.ietf.org/html/rfc3507) to refer HTTP traffic to one or more content adaptation devices for inspection and possible modification. You can add an ICAP service to any TCP service chain, but only HTTP traffic is sent to it, because ICAP is not supported for other protocols. You can configure up to ten ICAP services using SSL Orchestrator; a minimal ICAP exchange is sketched below. For more information on ICAP services, see Creating ICAP services.
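
    For a sense of what the ICAP protocol looks like on the wire, the sketch below sends a minimal ICAP OPTIONS request, the capability query defined in RFC 3507. The host name is a placeholder; 1344 is the standard ICAP port.

    ```python
    import socket

    # Minimal ICAP OPTIONS exchange (RFC 3507), for illustration only.
    ICAP_HOST, ICAP_PORT = "icap.example.net", 1344

    request = (
        f"OPTIONS icap://{ICAP_HOST}/reqmod ICAP/1.0\r\n"
        f"Host: {ICAP_HOST}\r\n"
        "Encapsulated: null-body=0\r\n"
        "\r\n"
    ).encode("ascii")

    with socket.create_connection((ICAP_HOST, ICAP_PORT), timeout=5) as s:
        s.sendall(request)
        reply = s.recv(4096)   # e.g. "ICAP/1.0 200 OK" plus capabilities

    print(reply.decode("ascii", errors="replace"))
    ```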

  • Service chains

    SSL Orchestrator service chains process specific connections based on classifier rules, which look at protocol, source and destination addresses, and so on. These service chains can include four types of services you define (Layer 2 inline services, Layer 3 inline services, receive-only services, and ICAP services), as well as any decrypt zone between separate ingress and egress devices; a simplified chain traversal is sketched below. For more information on service chains, see Creating service chains to link services.
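
    The sketch below illustrates how a chain might pass one connection through services of each type in order: inline services can modify or drop traffic, receive-only services get a copy, and ICAP services see only HTTP. All classes and names are hypothetical, not the product's internals.

    ```python
    from dataclasses import dataclass
    from typing import Callable, Optional

    @dataclass
    class Connection:
        protocol: str          # e.g. "http", "smtp"
        payload: bytes

    @dataclass
    class Service:
        name: str
        kind: str              # "inline-l2", "inline-l3", "receive-only", "icap"
        handler: Callable[[Connection], Optional[Connection]]

    def run_chain(chain: list[Service], conn: Connection) -> Optional[Connection]:
        """Pass a connection through each service in the chain, in order."""
        for svc in chain:
            if svc.kind in ("inline-l2", "inline-l3"):
                result = svc.handler(conn)        # inline: may modify or drop
                if result is None:
                    return None                   # connection blocked mid-chain
                conn = result
            elif svc.kind == "receive-only":
                svc.handler(conn)                 # copy only; traffic unchanged
            elif svc.kind == "icap" and conn.protocol == "http":
                conn = svc.handler(conn) or conn  # ICAP sees HTTP traffic only
        return conn                               # forward toward the egress device
    ```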

  • Service chain classifier rules

    Each service chain classifier rule chooses ingress connections to be processed by a service chain you configure (different classifier rules may send connections to the same chain). Each classifier rule has three filters. The filters match the source (client) IP address; the destination (which can be an IP address, IP Intelligence category, IP geolocation, domain name, domain URL Filtering category, or server port); and the application protocol (based on port or protocol detection). Filters can overlap, so the implementation chooses the classifier rule with the most specific matches for each connection, as in the sketch following this entry.

    For more information on service chain classifier rules, see Creating TCP service chain classifier rules and/or Creating UDP service chain classifier rules.
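
    The "most specific match" behavior can be illustrated with a toy classifier. The rules, scoring, and field names below are hypothetical; the sketch scores prefix length and protocol exactness and picks the highest-scoring matching rule.

    ```python
    import ipaddress

    # Hypothetical rules: "*" is a wildcard; the last rule is the most specific.
    RULES = [
        {"src": "0.0.0.0/0",   "dst": "0.0.0.0/0",      "proto": "*",    "chain": "default_chain"},
        {"src": "10.1.0.0/16", "dst": "0.0.0.0/0",      "proto": "http", "chain": "http_chain"},
        {"src": "10.1.2.0/24", "dst": "198.51.100.0/24", "proto": "http", "chain": "strict_chain"},
    ]

    def specificity(rule, src, dst, proto):
        """Return a score if the rule matches, else None. Longer prefixes
        and an exact protocol match count as more specific."""
        src_net = ipaddress.ip_network(rule["src"])
        dst_net = ipaddress.ip_network(rule["dst"])
        if src not in src_net or dst not in dst_net:
            return None
        if rule["proto"] not in ("*", proto):
            return None
        return src_net.prefixlen + dst_net.prefixlen + (1 if rule["proto"] == proto else 0)

    def classify(src_ip, dst_ip, proto):
        src, dst = ipaddress.ip_address(src_ip), ipaddress.ip_address(dst_ip)
        matches = [(s, r) for r in RULES
                   if (s := specificity(r, src, dst, proto)) is not None]
        return max(matches, key=lambda m: m[0])[1]["chain"] if matches else None

    print(classify("10.1.2.7", "198.51.100.9", "http"))   # -> strict_chain
    ```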

  • Decrypt zone

    A decrypt zone refers to the network region between separate ingress and egress BIG-IP® devices where cleartext data is available for inspection. In effect, an extra inline service can be placed at the end of every service chain for additional inspection. You cannot configure a decrypt zone in the scenario where a single BIG-IP® system handles both ingress and egress traffic, because the decrypt zone does not exist.

  • Transparent/Explicit Proxy

    This implementation can operate in transparent and/or explicit proxy mode. A transparent proxy intercepts normal communication without requiring any special client configuration; clients are unaware of the proxy in the network. In this implementation, the transparent proxy scheme can intercept all types of TLS and TCP traffic; it can also process UDP and forward other types of IP traffic. The explicit proxy scheme supports only HTTP(S) per RFC 2616. Transparent proxy deployments work with direct routing, policy-based routing (PBR), and Web Cache Communication Protocol (WCCP), which depend on the surrounding network services to support those mechanisms. Explicit proxy deployments work with manual browser settings, proxy auto-config (PAC), and Web Proxy Autodiscovery Protocol (WPAD); PAC and WPAD require additional iRule configuration (not included) to provide the PAC/WPAD script content. A hypothetical explicit-proxy request is sketched below.
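
    To show what explicit proxying means for a client, the sketch below uses Python's standard http.client to send a CONNECT request through a hypothetical explicit proxy; a transparent deployment would need no such client-side configuration. The proxy and destination addresses are placeholders.

    ```python
    import http.client

    # Explicit proxy: the client is configured with the proxy's address
    # and asks it to tunnel the TLS session via CONNECT.
    PROXY_HOST, PROXY_PORT = "proxy.example.net", 3128

    conn = http.client.HTTPSConnection(PROXY_HOST, PROXY_PORT, timeout=10)
    conn.set_tunnel("www.example.com", 443)   # sends "CONNECT www.example.com:443"
    conn.request("GET", "/")
    print(conn.getresponse().status)
    ```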

  • Certificate Authority (CA) certificate

    This implementation requires a Certificate Authority PKI (public key infrastructure) certificate and matching private key for SSL Forward Proxy. Your TLS clients must trust this CA certificate as a signer of server certificates; the sketch below shows how such a CA certificate might be generated.
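
    As an illustration of the kind of CA certificate and private key this implementation needs, the sketch below generates a self-signed CA with the third-party Python cryptography package. The subject name and validity period are arbitrary examples; in practice you would import the certificate from your own PKI and distribute it to clients as a trusted root.

    ```python
    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example Forward Proxy CA")])
    now = datetime.datetime.utcnow()

    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)                     # self-signed: issuer == subject
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=365))
        .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
        .sign(key, hashes.SHA256())
    )

    print(cert.public_bytes(serialization.Encoding.PEM).decode())
    ```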

  • SNAT

    A SNAT (Secure Network Address Translation) is a feature that defines routable alias IP addresses that the BIG-IP® system substitutes for client IP source addresses when making connections to hosts on the external network. A SNAT pool is a pool of translation addresses that you can map to one or more original IP addresses. Translation addresses in a SNAT pool should not be self IP addresses.
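
    The idea of a SNAT pool can be illustrated with a toy translation function: each client source address is consistently mapped to one of several routable translation addresses. The addresses and the hashing scheme below are hypothetical; they are not how the BIG-IP® system actually allocates translations.

    ```python
    import hashlib

    # Placeholder translation addresses; none of them is a self IP.
    SNAT_POOL = ["203.0.113.10", "203.0.113.11", "203.0.113.12"]

    def translate_source(client_ip: str) -> str:
        """Pick a translation address for a client; hashing keeps each
        client pinned to the same address across connections."""
        digest = hashlib.sha256(client_ip.encode()).digest()
        return SNAT_POOL[digest[0] % len(SNAT_POOL)]

    print(translate_source("10.1.2.7"))   # same client always maps the same way
    ```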

  • Sync-Failover device group

    A Sync-Failover device group (part of the Device Service Clustering (DSC®) functionality) contains BIG-IP® devices that synchronize their configuration data and failover to one another when a device becomes unavailable. In this configuration, a Sync-Failover device group supports a maximum of two devices.