I know HP is expensive, but anyway. I work at a mobile telecom company, and in our department we use exclusively HP hardware. The servers run 24/7 with an allowed service downtime of 10 minutes a year. Hardware failures have never stopped our services; it has always been software problems. We had an EVA array with 52 or so hard drives, and in five years we replaced maybe 4 disks and 2 or 3 fans in it. That platform (6 racks) has since been replaced with a newer one (2 racks). It has already had one motherboard replaced, but still no disk failures in two years now, if I remember right. The systems run in properly maintained server rooms at about 18 degrees Celsius. So all I can say is that HP is probably worth the money.

On Fri, Dec 30, 2011 at 01:35, wrote:
>> After you've put together this beast, keeping it up will be a big job
>> (budget an hour or 2 a day). It will fail, and with so many drives in such
>> a small space, it will probably fail often.
>
> This becomes a non-trivial task, as some of my friends at work will attest - they are looking after arrays with so many disks that they reckon on a disk failure every week, almost "without fail" as the saying goes. With the number of disks in these arrays, the manufacturers' MTBF figure about matches the number of failures they see.
>
> These arrays are petabyte arrays for data streams coming from satellites and tier 1 data storage from the Large Hadron Collider. But the size of your array, even if built with 1TB or larger drives, will still have a notable impact on the failure rate, so you do need to budget in a couple of spare drives, possibly set up as hot spares.

--
KPL
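For what it's worth, the "failure every week" arithmetic in the quote above is easy to reproduce. Here is a minimal Python sketch using hypothetical numbers (1,000 drives and a typical 1,000,000-hour datasheet MTBF; neither figure is from the posts above):

    # Back-of-the-envelope failure-rate estimate for a large drive array.
    # Assumes independent drives with a constant failure rate (exponential model).
    HOURS_PER_YEAR = 24 * 365

    def expected_failures_per_year(num_drives, mtbf_hours):
        """Expected number of drive failures per year across the whole array."""
        return num_drives * HOURS_PER_YEAR / mtbf_hours

    if __name__ == "__main__":
        drives = 1000          # hypothetical array size, not from the original post
        mtbf = 1_000_000       # hypothetical vendor MTBF in hours
        per_year = expected_failures_per_year(drives, mtbf)
        per_week = per_year / 52
        print(f"~{per_year:.1f} failures/year, ~{per_week:.2f} failures/week")

At those assumed numbers you would expect roughly 9 failures a year; scale up to a few thousand drives at the same MTBF and you are at about one failure a week, which is why budgeting a couple of hot spares is cheap insurance.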