The key here is that in different parts of the network, the actual IP address behind the public URI is different (diagram below).
Yes, the original public URI points to the IHS server, but the IP address it resolves to will be that of the nearest proxy, regardless of where you are.
We are a global company, and people travel to other locations for meetings and work sessions, taking their laptops with them. When they are in a different location they should not have to change their configuration, even though the IP address of the closest proxy is different.
The 'easiest' way to implement a transparent proxy is to just change the DNS server to map the same name to the proxy instead of the IHS server. Poof, everyone is now talking to the proxy (in our HQ location).
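As a minimal sketch of that DNS change (BIND-style zone records assumed; the names and IPs are the ones used later in this post):

    ; before: the public name resolves directly to the IHS server
    rtc1.server.domain.           IN  A  192.168.1.100
    ; after: the same public name resolves to the HQ proxy instead
    rtc1.server.domain.           IN  A  192.168.1.200
    ; a separate 'direct' name keeps pointing at IHS so the proxy can still reach the origin
    rtc1.server.direct.domain.    IN  A  192.168.1.100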
But the data traffic still travels across the WAN for most of our users, so we need a second proxy in each office location (mostly where software builds are done as well). So how do we intercept the resolved HQ proxy address in the remote offices and redirect it to the office-resident proxy? We use WCCP for that.
The DNS-resolved IP address for the public URI in a remote office is the same as it is in HQ (proxy2), but the routers use WCCP to detect that traffic and change it to the local proxy address. (If the local proxy were to fail, rather than have the request fail, it will flow to the HQ proxy; it just won't be as fast.)
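A rough sketch of the WCCP plumbing in a remote office, assuming a Cisco router and a Squid 3.x proxy (the interface name, router IP, and ports are placeholders, not our production values):

    ! remote-office router: advertise the standard web-cache service and
    ! intercept web traffic coming in from the user LAN
    ip wccp web-cache
    interface GigabitEthernet0/1
     ip wccp web-cache redirect in

    # remote-office squid.conf: register with the router and accept intercepted traffic
    wccp2_router 192.168.2.1
    wccp2_forwarding_method gre
    wccp2_return_method gre
    wccp2_service standard 0
    http_port 3129 intercept

(The standard web-cache service only covers port 80, so other ports need a dynamic service group, and the GRE/iptables plumbing on the proxy host itself is omitted; the point is just that the router, not the client, decides the traffic goes to the local proxy.)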
So the users use the public URI of the RTC (IHS) server, and the DNS servers and routers change the underlying IP depending on physical location. In the Austin office it will be the local office proxy, in the California office the same, and if someone travels to our headquarters location, the same.
The proxies are configured with an alternate DNS name in their cache_peer statement.
So the RTC user in Austin talks to proxy1, proxy1 talks to proxy2 in HQ, and proxy2 talks to IHS, which talks to WebSphere.
The RTC user in California talks to proxy9, proxy9 talks to proxy2 in HQ, etc.
If either user is in the HQ location, their RTC usage talks to proxy2 without their knowledge (DNS resolution).
Proxy2's job is to keep repetitive extracts from impacting the RTC server instance.
Proxy1's (and proxy9's) job is to give users something as close to local LAN speed as possible (and to keep repetitive traffic off the WAN altogether).
pre-proxy:
rtc1.server.domain = 192.168.1.100

post-proxy:
rtc1.server.domain = (HQ) 192.168.1.200, (Austin) 192.168.2.200, (California) 192.168.3.200, etc.
Austin & California proxy servers have cache_peer = rtc1.proxy.server.domain (192.168.1.200)
HQ proxy (rtc1.proxy.server.domain) has cache_peer = rtc1.server.direct.domain (192.168.1.100)
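In Squid terms, the two tiers look roughly like this (a sketch, not our exact config; the ports and options are assumptions):

    # remote-office proxy (proxy1 / proxy9): forward everything to the HQ proxy,
    # addressed by its alternate DNS name
    cache_peer rtc1.proxy.server.domain parent 3128 0 no-query default
    never_direct allow all

    # HQ proxy (proxy2): forward to the real IHS server via the 'direct' name,
    # treating it as the origin
    cache_peer rtc1.server.direct.domain parent 80 0 no-query originserver
    never_direct allow all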
The user still uses rtc1.server.domain in the web UI, the Eclipse client, and the build configs.
The traffic just goes somewhere else.
The DNS servers still return ONE IP address: rtc1.server.domain = 192.168.1.100 (old), 192.168.1.200 (new),
and the remote routers intercept that and change it to the local address.
So overnight everyone will start using the proxies without knowing.
So all the source code, JavaScript, and static web text (headings, etc.) get pulled to a LAN-speed server as close to the consumer as possible. We don't send repetitive stuff over the WAN more than once (and the WAN accelerators on the wire reduce duplicate bit patterns 40-95%), and we also don't extract it from the RTC database more than once.
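The cache tuning itself is ordinary Squid configuration, something along these lines (the sizes and patterns are illustrative placeholders, not our production values):

    # give the cache enough disk and allow large objects so source extracts can stay local
    cache_dir ufs /var/spool/squid 50000 16 256
    maximum_object_size 512 MB
    # keep static web assets (javascript, css, images) around instead of re-fetching them
    refresh_pattern -i \.(js|css|png|gif|jpg)$ 1440 80% 10080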
The main-office proxies are in place, and the remote-office proxy machines are in transit to the high-traffic offices as we speak.
Everything inside the blue circle is inside our HQ location, where RTC is located.
Note: before the Jumpstart team wants to choke me! This is an extreme solution for our environment; you could get 90% of the server benefit with just proxies at the HQ location,
and 90% of the speed benefit with proxies just at the remote locations (but we need a proxy at the HQ location anyhow, treated as if it were a remote location), so we might as well piggyback and get the best of both.
This tiered proxy architecture has been used on the web forever, so this is not an invention.
before proxies
with proxies
We have 5 RTC servers (for capacity), so right now we have 5 master proxies in the HQ location, one for each. When we decided on our public URIs on RTC 2, we didn't anticipate the proxies, so it's hard to use one proxy for multiple servers. This is one thing we hope we can resolve with server rename after we upgrade from 3.0.1.1 to 4.x later this year. (I want multiple load-balanced identical proxy servers, with ICP to keep them in sync, so we don't run the risk of overloading one proxy's CPU capacity.)
(And if there were project move support, we think we could consolidate the RTC servers to a single instance, with clustering for failover, after removing the redundant file extracts and the wasted RSS feed demand (enhancement in 4.0.3), which account for >80% of the CPU load on the server and DB.)
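Back to the ICP idea: on each of those identical load-balanced proxies, the sibling relationship would look roughly like this (the hostnames and ports are assumptions):

    # let the peer proxies ask each other 'do you already have this object?' before going to the parent
    icp_port 3130
    cache_peer proxy-a.server.domain sibling 3128 3130 proxy-only
    cache_peer proxy-b.server.domain sibling 3128 3130 proxy-only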
Comments
We see a small gain for web-based content, mostly Plan views, which load a lot of JavaScript and GUI elements.
Doesn't help Eclipse at all for non-SCM data.