State of Montana’s use case requires three or more Ethernet interfaces on customer edge (CE) sites/clusters to accommodate disparate routing domains, or VRFs. In the current customer setup, the SLO interface is dedicated to a DMZ network that is a separate VRF from the corporate network. This network is BGP routed and is required for application access by third parties (not State of Montana employees or the general public, but county and city governments).
The SLI interface is configured to connect to the corporate network VRF. This network provides the IP address space for VIPs assigned for state employees’ internal application access, and it is also routed via BGP. The third, “unconfigured” network is where origin servers are located; it is not BGP routed and is not accessed by employees.
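For reference, the separation between these three routing domains can be sketched with Linux VRFs (iproute2). The interface names, table IDs, and VRF names below are illustrative assumptions, not the customer’s actual values.

```shell
# Illustrative only -- interface names, table IDs, and VRF names are assumptions.

# DMZ VRF (SLO-equivalent interface): BGP routed, third-party access
ip link add vrf-dmz type vrf table 100
ip link set vrf-dmz up
ip link set eth1 master vrf-dmz

# Corporate VRF (SLI-equivalent interface): BGP routed, VIP space for employees
ip link add vrf-corp type vrf table 200
ip link set vrf-corp up
ip link set eth2 master vrf-corp

# Third network: origin servers, not BGP routed
ip link add vrf-origin type vrf table 300
ip link set vrf-origin up
ip link set eth3 master vrf-origin
```

Each VRF carries its own routing table, so routes learned via BGP in one domain never leak into the others, which is the behavior the customer’s setup depends on.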
This hub-and-spoke approach is how the customer uses their existing BIG-IP implementation. The ability to add virtual interfaces on their F5 appliances is a mandatory requirement for current operation: when they need to communicate with a network that cannot be reached from an existing interface via static routes or BGP, they create another interface. I have a UCS file from the customer’s existing F5 environment.
Some additional notes on the customer: Fleet configuration is being used to accommodate the BGP setup. Sub-interfaces cannot be used because of a limitation in how communication routes through the original interface’s gateway. This is a VMware CE implementation. From testing, we can see the additional interface configured in XC and in VMware, but no traffic is passed or observed.
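To narrow down where traffic stops on the additional interface, capturing on the guest side of the CE can help distinguish a vSwitch/port-group problem from a routing problem inside the CE. The interface name and peer address below are placeholders for illustration, not values from the customer environment.

```shell
# Illustrative only -- replace eth3 and <peer-address> with the real values.

# Confirm the added interface is up and count RX/TX at the link layer
ip -s link show eth3

# Watch for any frames arriving (even ARP); if nothing appears, suspect the
# VMware side (port group, promiscuous/forged-transmit policy) rather than XC
tcpdump -ni eth3 -c 20

# Generate test traffic from this segment and check ARP resolution
ping -c 3 -I eth3 <peer-address>
ip neigh show dev eth3
```

If ARP never resolves, the break is at layer 2 between the VM and the peer; if ARP resolves but pings fail, the problem is more likely routing within the CE.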