Okay, so you understand webfarms now. What's the magic that actually distributes the load, and how does it decide how that distribution is handled?
At ORCS Web we use Foundry Server Iron products to perform our webfarm load-balancing. If one of them fails, the other instantly takes over (in our testing, fail-over was sub-second!).
So what is this "Server Iron" thing? In the simplest terms, it's a layer 4-7 switch. It has multiple network ports on it and can be used literally like other types of switches. But it can also perform load-balancing and traffic distribution. A VIP (virtual IP) can be assigned to the SI (Server Iron), which then handles all traffic sent to that address/VIP. Further configuration tells the SI what to actually do with traffic sent to the VIP address.
The traffic that hits the VIP on the Server Iron is, of course, redistributed to a number of server nodes so the client request can be satisfied - that's the whole point of a webfarm. If one or more server nodes stop responding, the switches detect this and send all new requests to the servers that are still online - making the failure of a server node almost transparent to the client.
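To make that failover behaviour concrete, here is a minimal Python sketch of the idea - new requests only ever reach nodes that still pass their health check, so a dead node is invisible to the client. The function and parameter names are hypothetical, not anything from the Server Iron itself:

```python
def dispatch(request, nodes, is_healthy):
    """Pick a server node for a new request, skipping any node
    whose health check is currently failing."""
    online = [n for n in nodes if is_healthy(n)]
    if not online:
        # Only if every node is down does the client see a failure.
        raise ConnectionError("all server nodes are down")
    # Spread requests across whatever is still online.
    return online[hash(request) % len(online)]
```

If `web1` goes down, `dispatch` simply stops handing it requests; the client never notices, which is exactly the transparency described above.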
The traffic can be distributed based on a few different algorithms. The most common are:
* Round Robin: The switches send requests to each server in rotation, regardless of how many connections each server has or how quickly it replies.
* Fastest response: The switches select the server node with the fastest response time and send new connection requests to that server.
* Least connections: The switches send traffic to whichever server node has the fewest active connections.
* Active-passive: This is called Local/Remote on a Foundry switch, but it is still basically active/passive. It allows one or more servers to be designated as "local", which marks them as primary for all traffic. This is combined with one of the methods above to determine the order in which "local" server nodes receive requests. If all "local" (active) server nodes were down, traffic would be sent to the "remote" server nodes. Note that "remote" in this case doesn't really have to mean remote - a "remote" server could be sitting right next to the "local" servers but be marked as remote in the configuration so it operates as a hot-standby server. This setting can also be used in a true remote situation where the servers sit in a different physical data center - perhaps for extreme disaster-recovery situations.
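The four strategies above can be sketched in a few lines of Python. This is just an illustration of the selection logic, not the switch's actual implementation - all class, attribute, and method names are made up for the example:

```python
import itertools

class LoadBalancer:
    """Toy model of the distribution algorithms described above."""

    def __init__(self, servers):
        self.servers = servers                       # node names
        self._rotation = itertools.cycle(servers)    # for round robin
        self.connections = {s: 0 for s in servers}   # active connection counts
        self.response_ms = {s: 0.0 for s in servers} # last measured response times
        self.local = set(servers)                    # nodes marked "local" (primary)

    def round_robin(self):
        # Each server in turn, regardless of load or speed.
        return next(self._rotation)

    def fastest_response(self):
        # Whichever node answered its last probe quickest.
        return min(self.servers, key=lambda s: self.response_ms[s])

    def least_connections(self):
        # Whichever node currently shows the fewest active connections.
        return min(self.servers, key=lambda s: self.connections[s])

    def local_remote(self, healthy):
        # Prefer healthy "local" nodes; fall back to "remote" standbys
        # only when every local node is down.
        candidates = [s for s in self.servers if s in self.local and s in healthy]
        if not candidates:
            candidates = [s for s in self.servers if s in healthy]
        return min(candidates, key=lambda s: self.connections[s])
```

Note how `local_remote` wraps one of the other methods (here, least connections) - that mirrors the way the Local/Remote setting on the switch is combined with another algorithm to order the "local" nodes.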