Webfarms II: Balancing The Load.

Written by Brad Kingsley

Okay, so you understand webfarms now. What's the magic that actually distributes the load, and how does it decide how the distribution is handled?

At ORCS Web we use the Foundry Server Iron products to perform our webfarm load balancing. If one of them fails, the other instantly takes over (in our testing, fail-over took less than a second!).

So what is this "Server Iron" thing? In simplest terms, it's a layer 4-7 switch. It has multiple network ports on it and can be used literally like other types of switches. But it can also perform load balancing and traffic distribution. A VIP (virtual IP) can be assigned to the SI (Server Iron), which then handles all traffic sent to that address/VIP. Further configuration tells the SI what to actually do with the traffic sent to the VIP address.
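To make the VIP idea concrete, here is a minimal sketch in Python of what the switch does at layer 4: accept each connection arriving at the virtual address and splice it to a backend node chosen by some routing function. This is illustrative only - a real Server Iron does this in hardware at wire speed, and the `choose_backend` hook and addresses below are my own placeholders, not Foundry configuration.

```python
import socket
import threading

def pipe(src, dst):
    """Copy bytes one direction until the source side closes."""
    try:
        while (chunk := src.recv(4096)):
            dst.sendall(chunk)
    except OSError:
        pass
    finally:
        dst.close()

def serve_vip(listener, choose_backend):
    """Accept connections arriving at the VIP and splice each one to a
    backend node picked by the routing function (round robin, least
    connections, etc.)."""
    while True:
        client, _ = listener.accept()
        backend = socket.create_connection(choose_backend())
        # one thread per direction: client -> backend and backend -> client
        threading.Thread(target=pipe, args=(client, backend), daemon=True).start()
        threading.Thread(target=pipe, args=(backend, client), daemon=True).start()
```

In practice the listener would be bound to the VIP itself, e.g. `socket.create_server((vip_address, 80))`; clients only ever see that one address.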

The traffic that hits the VIP on the Server Iron is of course redistributed to a number of server nodes so the client request can be satisfied - that's the whole point of a webfarm. If one or more server nodes are not responding, the switches are able to detect this and send all new requests to servers that are still online - making the failure of a server node almost transparent to the client.
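The health-detection step described above can be sketched in a few lines of Python. This is just an illustration of the idea - a real switch probes nodes continuously in firmware, and the probe here (a simple TCP connect) is only one of several checks such hardware can perform:

```python
import socket

def is_alive(host, port, timeout=1.0):
    """TCP health probe: can we open a connection within the timeout?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def healthy_nodes(nodes, probe=is_alive):
    """Keep only the nodes that answer the probe; new requests go to these."""
    return [(host, port) for host, port in nodes if probe(host, port)]
```

A dead node simply drops out of the pool, so new connections never see it - which is exactly why the failure is almost invisible to clients.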

The traffic can be distributed based on several different logic algorithms. The most common are:

* Round Robin: The switches send requests to each server in rotation, regardless of how many connections each server has or how fast it may reply.

* Fastest response: The switches select the server node with the fastest response time and send new connection requests to that server.

* Least connections: The switches send traffic to whichever server node shows as having the fewest active connections.

* Active-passive: This is called Local/Remote on a Foundry switch, but it is still basically active/passive. It allows one or more servers to be designated as "local," which marks them as primary for all traffic. This is combined with one of the methods above to determine the order in which the "local" server nodes receive requests. If a situation were to arise that all of the "local" (active) server nodes were down, then traffic would be sent to the "remote" server nodes. Note that "remote" in this case doesn't really have to mean remote - the "remote" server could be sitting right next to the "local" servers, but it is marked as remote in the configuration to let it operate as a hot-standby server. This setting can also be used in a true remote situation where there are servers in a different physical data center - perhaps for extreme disaster recovery situations.
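As a rough illustration, the four distribution methods above can be modeled in a few lines of Python. The `Node` fields and function names here are my own shorthand, not Foundry terminology, and a real switch tracks connection counts and response times itself:

```python
import itertools

class Node:
    def __init__(self, name, local=True):
        self.name = name
        self.local = local          # "local" = primary; False = hot standby
        self.active_connections = 0
        self.avg_response_ms = 0.0
        self.up = True

def round_robin(nodes):
    """Hand out nodes in fixed rotation, ignoring load and speed."""
    pool = itertools.cycle(nodes)
    while True:
        yield next(pool)

def fastest_response(nodes):
    """Pick the live node with the quickest measured response time."""
    return min((n for n in nodes if n.up), key=lambda n: n.avg_response_ms)

def least_connections(nodes):
    """Pick the live node currently holding the fewest connections."""
    return min((n for n in nodes if n.up), key=lambda n: n.active_connections)

def local_remote(nodes, pick=least_connections):
    """Active-passive: prefer 'local' nodes; only fall back to 'remote'
    standbys when every local node is down."""
    local = [n for n in nodes if n.local and n.up]
    return pick(local) if local else pick([n for n in nodes if n.up])
```

Note how `local_remote` composes with one of the other methods, just as the article describes: the secondary algorithm decides the order among the "local" nodes.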

Webfarms: The Only Way To Host!

Written by Brad Kingsley

Networks can be configured to be so incredibly redundant now - for reasonable prices - that there is no excuse for a data center not to achieve five nines (99.999%) of availability.

But what about the servers and applications? Why spend so much time up front configuring the network to make sure it doesn't fail, and then deploy an application to a single server?

Sure, there are ways to make sure individual servers have some redundancy to minimize failures -- things like RAID1, RAID5, or RAID10 (redundant array of inexpensive disks), which will protect against a disk drive failure (and I highly recommend this type of configuration for all production servers - and preferably the use of hardware RAID vs. software RAID). But what happens if a file gets corrupted on the RAID array? Or a recent configuration change brings the application down? Or a newly released patch conflicts with other settings and causes problems? Well, in these situations the server will go down and the application(s) hosted on that server will be offline.

A good monitoring and alerting process will allow the system administrator to detect and address these issues quickly, but there will still be some level of downtime associated with the issue. And depending on the type of issue, even the best system administrator might not be able to resolve it immediately - it may take time. Time during which your application is unavailable and you may be losing business due to the site interruption.
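A bare-bones version of such a monitoring check might look like the Python sketch below. The URL and the `notify` hook (pager, e-mail, ticket system) are placeholders of my own; real monitoring products add retries, escalation, and scheduling on top of this core loop:

```python
import urllib.request

def check_site(url, timeout=5):
    """Return (ok, detail); ok is True only for an HTTP 200 within timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200, "HTTP %d" % resp.status
    except OSError as exc:  # covers URLError, HTTPError, socket timeouts
        return False, str(exc)

def alert_if_down(url, notify, check=check_site):
    """Run one check and fire the notify callback only on failure."""
    ok, detail = check(url)
    if not ok:
        notify("%s appears DOWN: %s" % (url, detail))
    return ok
```

Even with this in place, detection is not recovery - which is exactly the gap the webfarm closes.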

So, what can you do?

A great option - and one that has recently become more affordable - is to host your application on a webfarm. A webfarm consists of two or more web servers with the same configuration that serve up the same content. There are special switches and processes involved that allow each of these servers to respond to requests sent to a single location. For example, say we have two servers - svr1.orcsweb.com and svr2.orcsweb.com - that have 100% the same configuration and content. We could configure a special switch* to handle traffic that is sent to www.orcsweb.com and redirect the traffic to either of these nodes depending on some routing logic. All clients visiting the main URL (in this case www.orcsweb.com) have no idea whether this is a single server - or ten servers! The balancing between nodes is seamless and transparent.

[*note: There is also software that can handle the routing process, but experience and testing have shown that these types of solutions are generally not as scalable, fast, or efficient as the hardware switch solutions]

The routing logic can be any of a number of different options - the most common are the methods described above in "Webfarms II: Balancing The Load": round robin, fastest response, least connections, and active/passive.