The SteelHead (in the cloud) doesn’t use automatic peering. When you run a server in the cloud, you deploy the SteelHead (in the cloud) to be the furthest SteelHead in the network because the Discovery Client on the server is configured to use the SteelHead (in the cloud) automatically. When you run a client in the cloud, and there are multiple SteelHeads in the path to the server, the SteelHead (in the cloud) is selected for optimization first. You can enable automatic peering on the remote SteelHeads to make the SteelHead (in the cloud) peer with the furthest SteelHead in the network.
Appliances in a serial cluster process the peering rules you specify in a spill-over fashion. When the maximum number of TCP connections for a SteelHead is reached, that appliance stops intercepting new connections. This behavior gives the next SteelHead in the cluster the opportunity to intercept the new connection, if it has not reached its own maximum number of connections. The in-path peering rules and in-path rules tell the SteelHeads in a cluster not to intercept connections between themselves.
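The spill-over behavior can be sketched as a simple ordered selection. This is an illustrative model only, not Riverbed product code; the appliance names and dictionary fields are hypothetical:

```python
# Hypothetical sketch of serial-cluster spill-over: each appliance in the
# chain intercepts new connections only while below its connection limit.

def select_intercepting_appliance(cluster):
    """Return the first appliance in the serial chain that can still
    intercept a new connection, or None if all are at their limit."""
    for appliance in cluster:
        if appliance["current_connections"] < appliance["max_connections"]:
            return appliance["name"]
    return None  # every appliance passes the connection through unoptimized

cluster = [
    {"name": "steelhead-1", "current_connections": 6000, "max_connections": 6000},
    {"name": "steelhead-2", "current_connections": 1200, "max_connections": 6000},
]
print(select_intercepting_appliance(cluster))  # steelhead-2
```

Here steelhead-1 is full, so the new connection spills over to steelhead-2; if every appliance were full, the connection would pass through unoptimized.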
Note: For environments in which you want to optimize MAPI or FTP traffic, which requires all connections from a client to be optimized by one SteelHead, we strongly recommend using the primary and backup redundancy configuration instead of a serial cluster. For larger environments that require multiappliance scalability and high availability, we recommend using the Interceptor to build multiappliance clusters. For details, see the SteelHead Interceptor Deployment Guide and the SteelHead Interceptor User’s Guide.

By default, enhanced autodiscovery peering is enabled. Without enhanced autodiscovery, the SteelHead uses regular autodiscovery.
With regular autodiscovery, the SteelHead finds the first remote SteelHead along the connection path of the TCP connection, and optimization occurs there: for example, if you had a deployment with four SteelHeads (A, B, C, D), where D represents the appliance that is furthest from A, the SteelHead automatically finds B, then C, and finally D, and optimization takes place in each.

The default peering rule number 1, with the SSL incapable flag, matches any SSL connection whose IP address and destination port appear in the list of bypassed clients and servers in the Networking SSL: SSL Main Settings page.
The bypassed list includes the IP addresses and port numbers of SSL servers that the SteelHead is bypassing because it couldn’t match the common name of the server’s certificate with one in its certificate pool. The list also includes servers and clients whose IP address and port combination have experienced an SSL handshake failure. For example, a handshake failure occurs when the SteelHead can’t find the issuer of a server certificate on its list of trusted certificate authorities.

Enhanced autodiscovery greatly reduces the complexities and time it takes to deploy SteelHeads. It works so seamlessly that occasionally it has the undesirable effect of peering with SteelHeads on the Internet that aren’t in your organization's management domain or your corporate business unit. When an unknown (or unwanted) SteelHead appears connected to your network, you can create a peering rule to prevent it from peering and remove it from your list of peers. The peering rule defines what to do when a SteelHead receives an autodiscovery probe from the unknown SteelHead.
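Conceptually, the peering-rule list is evaluated in order against the source of each autodiscovery probe. The following is an illustrative model, not Riverbed's implementation; the rule fields and the "auto" default are assumptions for the sketch:

```python
# Illustrative model of an ordered peering-rule list deciding whether to
# peer with the SteelHead that sent an autodiscovery probe.

def evaluate_peering(rules, probe_source_ip):
    for rule in rules:                      # rules are checked in order
        if probe_source_ip in rule["match"]:
            return rule["action"]           # "pass" = do not peer
    return "auto"                           # default: normal autodiscovery

# A rule that refuses to peer with an unknown SteelHead on the Internet.
rules = [
    {"match": {"203.0.113.50"}, "action": "pass"},
]
print(evaluate_peering(rules, "203.0.113.50"))  # pass
print(evaluate_peering(rules, "10.0.0.5"))      # auto
```

The key property is first-match-wins ordering: a pass rule for the unwanted peer shields it from optimization while all other probes fall through to normal autodiscovery.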
When a SteelHead transitions from normal operation to fail-to-wire mode, SteelHead circuitry physically moves to electronically connect the SteelHead LAN and WAN ports to each other, and physically disconnects these two ports from the rest of the SteelHead. During the transition to fail-to-wire mode, devices linked to the SteelHead momentarily disconnect and then immediately connect again. After the transition, traffic resumes flowing as quickly as the connected devices are able to process it. For example, spanning-tree configuration and routing-protocol configuration influence how quickly traffic resumes flowing. Traffic that was passed through is uninterrupted. Traffic that was optimized might be interrupted, depending on the behavior of the application-layer protocols. When connections are restored, the traffic resumes flowing, although without optimization.
Some network interfaces support fail-to-block mode. In fail-to-block mode, if the SteelHead has an internal software failure or power loss, the SteelHead LAN and WAN interfaces power down and stop bridging traffic. Fail-to-block mode is useful only if the network has a routing or switching infrastructure that can automatically divert traffic from the link after the failed SteelHead blocks it. You can use fail-to-block mode with connection forwarding, the allow-failure command, and an additional SteelHead on another path to the WAN to achieve redundancy. In physically in-path deployments, link state propagation (LSP) helps communicate link status between the devices connected to the SteelHead. When this feature is enabled, the link state of each SteelHead LAN and WAN pair is monitored.
If either physical port loses link status, the link of the corresponding physical port is also brought down. Link state propagation allows link failure to quickly propagate through the SteelHead, and it is useful in environments where link status is used as a fast-fail trigger. Using the appropriate cables and interface settings for in-path deployments is vital to performance and resiliency. The physical cabling and interface settings that connect the SteelHead to the LAN and WAN equipment (typically a switch and router) at the site must be correct. Duplex mismatches between the SteelHead and equipment connected to it, either during normal operations or during failures when the SteelHead is in fail-to-wire mode, have a significant impact on the performance of all traffic passing through the SteelHead.
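The LAN/WAN pairing behavior of link state propagation can be sketched as follows. This is a simplified, assumed model of the behavior described above, not appliance code; the interface names follow the lan0_0/wan0_0 convention:

```python
# Minimal sketch of link state propagation: when one port of a LAN/WAN
# pair loses link, the paired port is brought down too, so neighboring
# devices detect the failure quickly.

class InPathPair:
    def __init__(self):
        self.link = {"lan0_0": True, "wan0_0": True}

    def port_failed(self, port):
        peer = "wan0_0" if port == "lan0_0" else "lan0_0"
        self.link[port] = False
        self.link[peer] = False   # propagate the failure to the paired port

pair = InPathPair()
pair.port_failed("lan0_0")
print(pair.link)  # {'lan0_0': False, 'wan0_0': False}
```

Because the paired port goes down immediately, routers and switches that use link status as a fast-fail trigger can reroute without waiting for protocol timeouts.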
The correct duplex settings to use depend on the capabilities of all of the interfaces in the chain of in-path interfaces: the connected LAN device (typically a switch), the LAN and WAN ports on the SteelHead bypass cards in use by one or more in-path SteelHeads, and the connected WAN device (typically a router). A typical in-path deployment has a SteelHead LAN port connecting to a switch port that is 10/100/1000 Mbps capable, but the SteelHead WAN port connects to a router interface that is capable of only 10/100. In this deployment, manually set both the SteelHead LAN and WAN ports to use 100 Mbps, full duplex. These settings ensure correct operation both during normal operation and in fail-to-wire mode. For example, an interface mismatch can happen if the LAN interface is connected to a 1-Gbps device, the WAN interface is connected to a 100-Mbps device, and WAN bandwidth is close or equal to 100 Mbps.
To avoid any potential bottleneck that prevents the SteelHead LAN interface from sending or receiving data at a rate greater than 100 Mbps, it is prudent to use automatic negotiation on both the LAN and WAN interfaces. If you use automatic negotiation, the SteelHead can perform at its best on both the LAN side and the WAN side. Duplex misconfiguration is not limited to the chain of in-path interfaces; it matters especially at remote locations, where WAN limitations already restrict the potential performance of applications.
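A duplex or speed mismatch anywhere in the chain of in-path interfaces degrades throughput, so a systematic check of each adjacent pair is useful. The following is a hedged sketch with hypothetical device names, not a diagnostic tool shipped with the product:

```python
# Simple consistency check over the chain of in-path interfaces
# (switch port, SteelHead LAN/WAN ports, router port): flag any
# adjacent pair whose negotiated speed or duplex settings differ.

def find_mismatches(chain):
    """chain: ordered list of (name, speed_mbps, duplex) tuples."""
    issues = []
    for (a, a_spd, a_dup), (b, b_spd, b_dup) in zip(chain, chain[1:]):
        if a_spd != b_spd or a_dup != b_dup:
            issues.append(f"{a} <-> {b}: {a_spd}/{a_dup} vs {b_spd}/{b_dup}")
    return issues

chain = [
    ("switch", 1000, "full"),
    ("sh-lan", 100, "full"),   # SteelHead ports pinned to match the router
    ("sh-wan", 100, "full"),
    ("router", 100, "full"),
]
print(find_mismatches(chain))  # flags the leftover switch <-> sh-lan mismatch
```

In this example the SteelHead ports were pinned to 100/full to match the router, but the switch port was left at 1000/full; the check surfaces exactly that leftover mismatch.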
There might be long-standing, unrealized duplex-related errors in the existing LAN infrastructure or even on host interfaces. You might discover these long-standing issues only when a SteelHead is deployed at the site and attention is concentrated on achieving high performance. Because of the infrastructure duplex problems, the SteelHead performance gains are not as significant as expected. Any such infrastructure issues limit the optimization possible by deploying a SteelHead and must be resolved to realize the full benefits of a SteelHead deployment.

The SteelHead closest to the LAN is configured as the primary, and the other SteelHead is configured as the backup.
The primary SteelHead optimizes traffic and the backup SteelHead checks to make sure the primary SteelHead is functioning and not in admission control. Admission control means the SteelHead has stopped trying to optimize new connections, due to reaching its TCP connection limit or due to some abnormal condition. If the backup SteelHead cannot reach the primary or if the primary has entered admission control, the backup SteelHead begins optimizing new connections until the primary recovers. After the primary has recovered, the backup SteelHead stops optimizing new connections, but continues to optimize any existing connections that were made while the primary was down.
The recovered primary optimizes any newly formed connections. If you use the primary and backup deployment method with SteelHeads that have multiple active in-path links, peering rules must also be configured. Add peering rules to both SteelHeads for each in-path interface; these peering rules must use the pass action, with the peer IP address set to each of the in-path IP addresses. This setting ensures that during any window of time in which both SteelHeads are active (for example, during a primary recovery), the SteelHeads do not try to optimize connections between themselves.

Important: For environments in which you want to optimize MAPI or FTP traffic, which requires all connections from a client to be optimized by one SteelHead, Riverbed strongly recommends using the primary and backup redundancy configuration instead of a serial cluster deployment. For larger environments that require multiappliance scalability and high availability, Riverbed recommends using the SteelHead Interceptor to build multiappliance clusters. For details, see the SteelHead Interceptor User’s Guide.
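The primary/backup handoff described above reduces to a small decision rule. This is a simplified sketch of that logic, not product code:

```python
# Simplified sketch of the primary/backup decision: the backup optimizes
# new connections only while the primary is unreachable or in admission
# control (that is, it has stopped accepting new connections).

def backup_should_optimize(primary_reachable, primary_in_admission_control):
    return (not primary_reachable) or primary_in_admission_control

print(backup_should_optimize(True, False))   # False: primary handles new connections
print(backup_should_optimize(True, True))    # True: primary at its limit
print(backup_should_optimize(False, False))  # True: primary down
```

Note that this rule governs only new connections: per the text above, existing connections that the backup picked up continue to be optimized by the backup even after the primary recovers.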
The SteelHead also builds a table mapping MAC addresses to the LAN or WAN interface, based on the Ethernet frames that cross through the SteelHead in-path interface. Using these tables, the SteelHead learns which destination MAC address to use for packets the SteelHead originates and the corresponding interface on which to transmit the packet. For example, if local hosts are on the same subnet as the SteelHead in-path interfaces and the WAN routers are using HSRP, the transmitting in-path interface learns the IP-address-to-MAC-address mappings of all local hosts on the subnet through ARP, through the LAN interface. The SteelHead learns remote IP-address-to-MAC-address relationships through simplified routing (destination-only) through the WAN interface. If the SteelHead has not learned an IP-address-to-MAC-address relationship through simplified routing or ARP, it sends the packet to its default gateway.
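The learning behavior can be modeled as a small lookup table. This is an illustrative, assumed simplification (the MAC addresses, interface names, and fallback tuple are hypothetical), not the appliance's actual data structure:

```python
# Illustrative MAC-learning sketch: the SteelHead records which in-path
# interface (LAN or WAN) each IP/MAC pair was learned on, and reuses that
# interface when transmitting packets to that host. Unlearned destinations
# fall back to the default gateway.

class MacTable:
    def __init__(self, default_gw=("gw-mac", "wan0_0")):
        self.table = {}            # ip -> (mac, interface)
        self.default_gw = default_gw

    def learn(self, ip, mac, interface):
        self.table[ip] = (mac, interface)

    def lookup(self, ip):
        return self.table.get(ip, self.default_gw)

t = MacTable()
t.learn("192.168.10.20", "aa:bb:cc:00:00:01", "lan0_0")  # local host, via ARP
t.learn("10.1.1.5", "00:00:0c:07:ac:01", "wan0_0")       # remote, via simplified routing
print(t.lookup("192.168.10.20"))  # ('aa:bb:cc:00:00:01', 'lan0_0')
print(t.lookup("172.16.0.9"))     # ('gw-mac', 'wan0_0') -- default gateway
```

The `00:00:0c:07:ac:01` entry mimics an HSRP virtual MAC learned through the WAN interface, which connects this sketch to the failover discussion that follows.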
When the network experiences an outage, the SteelHead in-path interface that transmits for the connection does not react to the change in the network until the flow of traffic through that transmitting in-path interface changes. After the traffic flow changes, the SteelHead in-path interface can learn that the destination is now reachable through the opposite interface (for example, through the LAN interface instead of the WAN interface).
For example, using the default simplified routing setting, a SteelHead learns that a remote IP address is associated with the MAC address of the primary WAN router that owns the HSRP virtual MAC. When the primary router WAN circuit fails, the primary router can decrement its HSRP priority and the standby router can preempt the primary router to assume control of the HSRP virtual MAC.
When this happens, the transmitting interface learns the HSRP MAC through the LAN interface and can continue transmitting traffic across the WAN for optimized connections. In certain scenarios, existing optimized connections do not survive: for example, if there is a failure between the SteelHead and a directly connected device, such as a damaged cable or a device losing power. When a directly connected device fails, a first-hop redundancy protocol such as HSRP detects the failure and a standby device can assume the primary role. However, some features on the SteelHead, such as link state propagation, can also detect the failure and bring down the associated interface (if the failure affects lan0_0, then wan0_0 is also brought down). The result is that all paths from the in-path interface that transmits for the optimized connection are in a down state, and the optimized connections do not continue.
Most applications recover by establishing new connections. Link state propagation provides feedback to other devices about the failure, so network protocols can detect the failure more quickly. The choice of a default gateway is very important in locations with multiple WAN routers. In addition to choosing a default gateway (and simplified routing) that minimizes packet ricochet, HSRP or similar protocols can be used to ensure that the loss of a single WAN router does not prevent the SteelHead from transmitting packets over the WAN.
Most WAN devices that support HSRP or similar protocols have a link-tracking option that allows them to relinquish the HSRP virtual IP address if a WAN link fails; use this option when possible. This section describes best practices for parallel SteelHead deployments at locations with multiple routers. Each of the scenarios that follow can be modified to use additional SteelHeads for each path to the WAN, using either the primary and backup or serial cluster configurations. If you are using multiple SteelHeads on each path, every SteelHead at the location must be configured as a connection-forwarding neighbor for every other SteelHead at the location. The next scenario uses a similar topology, except that each router has two links going back to the Layer-3 switches, totaling four links.
Each link between a router and a switch is an independent /29 network. You can configure dynamic routing protocols so that traffic can flow inbound or outbound. You can use this network design to prevent the loss of a single Layer-3 switch from cutting connectivity to the attached router. To ensure traffic is routed to the partner SteelHead during a failure, you can use the two in-path interfaces and deploy the SteelHeads in parallel, with connection forwarding and fail-to-block. A SteelHead can be deployed physically in-path on an 802.1Q trunk link, and it can optimize connections whose packets have been tagged with an 802.1Q header. As in other physical in-path deployments, the SteelHead's in-path interface must be configured with an IP address and a default gateway.
If the SteelHead's in-path IP address is in a subnet whose traffic is normally tagged when present on the in-path link, the SteelHead's in-path interface must be configured with the VLAN for that subnet. This configuration allows the SteelHead to appropriately tag packets transmitted from the in-path interface that use the in-path IP address as the source address.
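The tagging decision for SteelHead-originated packets can be sketched as a subnet membership test. This is a hypothetical illustration of the rule just described, not the appliance's forwarding code:

```python
# Sketch of the 802.1Q tagging decision for packets the SteelHead
# originates from its in-path IP address: apply the configured in-path
# VLAN tag only when the source subnet is tagged on the trunk link.

import ipaddress

def frame_vlan(src_ip, inpath_vlan, tagged_subnets):
    """Return the 802.1Q VLAN ID to apply, or None for an untagged frame."""
    for subnet in tagged_subnets:
        if ipaddress.ip_address(src_ip) in ipaddress.ip_network(subnet):
            return inpath_vlan
    return None

# In-path IP 192.168.10.5 lives in VLAN 10, which is tagged on the trunk.
print(frame_vlan("192.168.10.5", 10, ["192.168.10.0/24"]))  # 10
print(frame_vlan("172.16.1.1", 10, ["192.168.10.0/24"]))    # None (untagged)
```

This covers only SteelHead-originated traffic; as the next paragraph explains, for optimized connections the SteelHead simply preserves the VLAN IDs it observes.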
The SteelHead can optimize traffic on VLANs other than the VLAN containing the in-path IP address. SteelHeads maintain the VLAN ID when transmitting packets for the LAN side of optimized connections. If correct addressing or port transparency is used, the SteelHead uses the configured in-path VLAN ID when transmitting packets towards the WAN.
When using the full address transparency WAN visibility mode, SteelHeads maintain the VLAN ID (along with IP addresses and TCP ports) when transmitting packets on the WAN side of optimized connections. The SteelHead can maintain the VLAN IDs even if they differ between the packets going to the WAN and those returning from the WAN. You do not have to configure the VLAN ID on the in-path interface to match any of those seen for optimized connections. Consider a SteelHead deployed physically in-path on an 802.1Q trunk link.
Two VLANs are present on the in-path link: VLAN 10, which contains subnet 192.168.10.0/24, and VLAN 20, which contains subnet 192.168.20.0/24. The SteelHead is given an in-path IP address in the 192.168.10.0/24 subnet. The SteelHead is configured to use VLAN 10 for its in-path interface and to use the router's subinterface IP address as its default gateway. Even though the SteelHead has an in-path IP address in subnet 192.168.10.0/24 and VLAN 10, it can optimize traffic in VLAN 20.