On Thu, Jan 5, 2012 at 2:28 AM, c h wrote:
> In the beginning you could apply some simple logic, like this:
> - each server would maintain a table of the current load distribution
>   between servers, and on each request it would decide whether to serve
>   the request itself or redirect it to a less loaded server. The table
>   of load distribution should be refreshed at a reasonable rate, and
>   the most requested files should be relocated to less loaded servers.
> Really, with your Linux background you could probably cook up the
> logic better than anyone on this list :-)

Thank you for the information. I was just curious whether there was some
"standard" way to do load sharing that all the big boys (Google, etc.)
use, or maybe a daemon already written for this purpose.

It looks like I'll go with something similar: a single "routing" box that
handles the database, logins, and requests, plus two file servers holding
identical data. Since clicking around the web interface will be minimal
and the user will spend most (like 99%) of the time downloading data, the
routing box should be able to handle all of the login requests, maintain
a table of how many users each file server is serving, and direct each
user to the appropriate server.

I'm thinking the routing server will run Linux with a small hard drive,
and the two file servers will run Linux or FreeBSD with ZFS (Btrfs once
it supports RAID 5-like structures) on RAID-Z with single parity.

--
http://www.piclist.com PIC/SX FAQ & list archive
View/change your membership options at
http://mailman.mit.edu/mailman/listinfo/piclist
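
For what it's worth, the "table of how many users each server is serving"
idea boils down to least-connections selection. A minimal sketch in Python
(server names and the login/logout hooks are made up for illustration; a
real routing box would also need locking and health checks):

```python
# Least-connections routing sketch: the routing box tracks active users
# per file server and sends each new login to the least loaded one.
# "files1"/"files2" are hypothetical server names.

active_users = {"files1": 0, "files2": 0}  # current user count per server

def pick_server():
    """Return the file server currently serving the fewest users."""
    return min(active_users, key=active_users.get)

def on_login():
    """Assign a new user to the least loaded server and record it."""
    server = pick_server()
    active_users[server] += 1
    return server

def on_logout(server):
    """Release a user's slot when their session ends."""
    active_users[server] -= 1
```

With two servers this alternates logins between them until sessions start
ending, after which freed-up servers are preferred again.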