On 2012-01-05 01:31, V G wrote:
> On Thu, Jan 5, 2012 at 1:09 AM, Joe Koberg wrote:
>> On 2012-01-05 00:37, V G wrote:
>>> I have searched around and found this: http://zfsonlinux.org/ Also,
>>> I've read that the ZFS Linux port is incomplete and is missing
>>> features. Is this true? What are the limitations for Linux?
>> Last time I tried it on a VM, I got a couple of freezes. But who knows
>> what caused them. For what it's worth, that site carries maintained
>> code: the pool (v28) and filesystem (v5) versions it supports are the
>> most recent ones OpenSolaris shipped before Oracle shut it down. I
>> suspect it may crash your kernel, but it's less likely to scribble
>> over your data.
>>
>> I think there's also a FUSE driver. http://zfs-fuse.net/
>>
>> If it's missing the same features as FreeBSD's port, like iSCSI
>> support, I wouldn't worry too much. Looks like the main issue is
>> performance.
> Speaking of performance:
>
> http://www.phoronix.com/scan.php?page=article&item=zfs_ext4_btrfs&num=2
>
> BTRFS blows away ZFS in performance. I wonder what features it lacks
> compared to ZFS.

* No RAID5 or any other parity RAID, and no multi-way mirrors. It looks
like people still use MD+LVM for anything beyond trivial multi-drive
setups.

* Recovery actions appear to be manual. You have to mount the FS in
"degraded" mode, for example (rough sketch in the P.S. below). There
also may be no automatic repair when a read fails its checksum.

* No incremental dump / snapshot streaming (the second P.S. shows what
ZFS can do here).

* No fsck. As far as I know ZFS doesn't have one either: checksums,
scrubbing, and its always-consistent copy-on-write layout were meant to
eliminate the need from the start. I don't know why this is such a big
deal for btrfs, but the amount of noise about it worries me.

* Lacks the 5+ years of heavy production use ZFS got as the default
filesystem of a major commercial OS.

> I'm looking into the reliability of the current BTRFS at the moment.
> If people are saying it's "good enough" for non-critical (as in, lives
> are not dependent on its stability) use, then I'll go with Linux and
> BTRFS.

Based on what I'm reading, I wouldn't be comfortable using it yet.

It also seems to me your bottleneck will be the NIC, not the storage
system. A gigabit NIC moves about 125 MB/s at best, closer to 100 MB/s
in practice, and most cheap SATA drives already exceed that on large
sequential reads from a single disk. It might be worth adding NICs and
switch ports before adding a second, load-balanced machine.

(FreeBSD will also happily support Cisco FEC/GEC or 802.3ad ethernet
link aggregation if your switch does; a sample config is in the last
P.S.)

Joe
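P.S. For the curious, the manual btrfs recovery looks roughly like
this, going by the btrfs wiki. A sketch only, not something I've run in
anger, and the device names are made up:

  # mount read-write despite the dead member
  mount -o degraded /dev/sdb1 /mnt
  # add a replacement disk, then drop the missing one;
  # deleting the missing device rewrites its data onto the new disk
  btrfs device add /dev/sdd /mnt
  btrfs device delete missing /mnt

Compare ZFS, where a hot spare can kick in and resilver on its own.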
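P.P.S. Here's the incremental snapshot streaming btrfs is missing, as
ZFS does it. Again only a sketch; the pool, dataset, and host names are
invented:

  # take two snapshots some time apart
  zfs snapshot tank/data@mon
  zfs snapshot tank/data@tue
  # ship only the blocks that changed between the two snapshots
  zfs send -i tank/data@mon tank/data@tue | ssh backuphost zfs receive tank-backup/data

For a big dataset with small daily churn, that's the difference between
moving gigabytes and moving megabytes over the wire.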
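P.P.P.S. The FreeBSD link aggregation I mentioned is the lagg(4)
driver. Something like this in /etc/rc.conf bonds two NICs under LACP
(802.3ad); the interface names and address are placeholders for
whatever your box actually has:

  ifconfig_em0="up"
  ifconfig_em1="up"
  cloned_interfaces="lagg0"
  ifconfig_lagg0="laggproto lacp laggport em0 laggport em1 192.168.1.10/24"

The switch ports have to speak LACP too. And note that a single TCP
stream still tops out at one link's speed; aggregation pays off when
several clients hit the server at once.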