May I suggest a ZFS-based filesystem? FreeBSD, Solaris, OpenSolaris, and
Linux (via a beta third-party driver) support this advanced filesystem. I
would trust Solaris or FreeBSD the most.

Every ZFS metadata block contains a checksum of the blocks it references.
Every time ZFS reads a block, it verifies the checksum. If it doesn't
match, ZFS reads the copy from the mirror (or rebuilds it from parity) and
rewrites the original block, thus self-repairing (a toy sketch of this
verify-and-repair loop is at the end of this message). This is far more
advanced than simply checking whether the I/O channel returned a read
error and then retrying on the other channel (RAID-1). Even if the I/O
channel reports no errors, the data can still be corrupted. In fact, a
standard hardware RAID will happily copy corruption from one disk to
another in the name of "recovery", as long as the disks don't report read
errors. ZFS can identify incorrect data coming off the disk, something
neither a hardware RAID controller nor a software emulation of one can do.

ZFS supports pools of many mirrored sets (my preference), or RAID-Z groups
with single, double, or triple parity (meaning one, two, or three drives
can fail).

ZFS also supports real scrubbing (reading every allocated block and
comparing it to its checksum), which, done regularly, will identify and
self-repair disk errors very reliably. It also supports in-place growth,
zero-cost snapshots, replication, etc.

On Solaris it supports transparent kernel-level Windows file service via
CIFS/SMB, with ACLs et al.

Joe Koberg, AE5NE
joe@osoft.us

On 2011-12-29 16:05, V G wrote:
> On Thu, Dec 29, 2011 at 2:30 PM, Herbert Graf wrote:
>> Unfortunately been there, done that, have the t-shirt.
>>
>> While it SOUNDS like a good idea, the issue is stability and
>> reliability.
> Really? It didn't even come close to sounding like a good idea.
>
>> First, just the logistics: to support 8TB in RAID, that means probably
>> 16TB of raw disk space. Depending on what kind of speed is needed, that
>> can be 6-16 disks (or more). That's a lot of SATA ports, meaning an
>> expensive SATA RAID card (please don't even consider the "software"
>> RAID options consumer hardware pushes all the time).
> Why do you say software RAID is unsuitable for this?
>
>> That's just the beginning; getting a case and power supply for this is
>> non-trivial.
> Yeah, most likely going to buy a Dell or IBM server. I still need to
> know what to look for (like RAID cards, storage expansion options,
> etc.).
>
>> After you've put together this beast, keeping it up will be a big job
>> (budget an hour or two a day). It will fail; with so many drives in
>> such a small space, it will probably fail often.
> The U of T IT department will handle the maintenance.
>
>> In the end, unless you've got tons of time to waste and your users
>> don't mind the downtime, using a commercial solution will be FAR
>> better. There are lots of options out there. Considering the amount of
>> drive space required, chances are you'll need two units: the "server"
>> and a storage tower.
> Ah. That makes sense.
>
>> It would be really useful to the OP to have someone local who is
>> familiar with this sort of stuff guide them. Does the university in
>> question not have an IT department?
> They do, but their competence is questionable (from experience).
>
> APPROXIMATELY how much do you think the full server setup would cost?
> $100? $1000? $5000? $10000? Just the approximate range.
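
P.S. For anyone who wants to see the verify-on-read/self-repair idea
concretely, here is a toy Python sketch. It is emphatically not ZFS code:
the two-"disk" byte arrays, the 4KB block size, and the use of SHA-256
are simplifications of mine, chosen only to illustrate the mechanism (ZFS
actually keeps the checksum up in the parent block pointer, alongside the
metadata that references the block).

import hashlib

BLOCK_SIZE = 4096

class MirroredStore:
    """Toy two-way mirror: identical blocks on two 'disks', with each
    block's checksum stored separately from the data it covers."""

    def __init__(self, nblocks):
        self.disks = [bytearray(nblocks * BLOCK_SIZE) for _ in range(2)]
        self.checksums = [None] * nblocks   # kept apart from the data

    def _span(self, blkno):
        return slice(blkno * BLOCK_SIZE, (blkno + 1) * BLOCK_SIZE)

    def write(self, blkno, data):
        assert len(data) <= BLOCK_SIZE
        block = data.ljust(BLOCK_SIZE, b"\0")
        for disk in self.disks:             # both mirror sides get a copy
            disk[self._span(blkno)] = block
        self.checksums[blkno] = hashlib.sha256(block).digest()

    def read(self, blkno):
        want = self.checksums[blkno]
        block = bytes(self.disks[0][self._span(blkno)])
        if hashlib.sha256(block).digest() == want:
            return block                    # common case: checksum verifies
        # The disk returned data without any I/O error, but the checksum
        # caught silent corruption. Fetch the mirror's copy instead.
        block = bytes(self.disks[1][self._span(blkno)])
        if hashlib.sha256(block).digest() != want:
            raise IOError("block %d: both copies fail checksum" % blkno)
        # Self-repair: rewrite the bad copy from the good one.
        self.disks[0][self._span(blkno)] = block
        return block

    def scrub(self):
        # Verify every copy of every allocated block, repairing as needed.
        for blkno, want in enumerate(self.checksums):
            if want is None:
                continue
            good = self.read(blkno)         # verifies (and may heal) copy 0
            other = bytes(self.disks[1][self._span(blkno)])
            if hashlib.sha256(other).digest() != want:
                self.disks[1][self._span(blkno)] = good   # heal copy 1

# Demo: flip a bit on one disk ("bit rot"); the read detects and heals it.
store = MirroredStore(8)
store.write(3, b"important data")
store.disks[0][3 * BLOCK_SIZE] ^= 0xFF
assert store.read(3).rstrip(b"\0") == b"important data"
assert bytes(store.disks[0][3 * BLOCK_SIZE:3 * BLOCK_SIZE + 14]) == b"important data"

A RAID-1 card in the same situation would have returned the flipped bits
without complaint, since the drive reported no read error.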
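The parity variants rest on the same checksum-then-repair idea, except
that a bad or missing block is rebuilt from the surviving drives instead
of read from a mirror twin. A minimal single-parity illustration (again
mine, not the actual RAID-Z code; double and triple parity add further
independent syndromes so that two or three drives can fail):

def xor_blocks(blocks):
    # XOR equal-length blocks together, byte by byte.
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC"]        # three "data drives"
parity = xor_blocks(data)                 # one "parity drive"
survivors = [data[0], data[2], parity]    # drive 1 dies...
assert xor_blocks(survivors) == data[1]   # ...and is rebuilt exactly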