Herbert Graf wrote:
> On Thu, 2009-04-23 at 10:02 -0400, Rolf wrote:
>
>> Based on some discussion here, I decided to do some tests of my own....
>>
>> I have a gigabit network with Jumbo-frames enabled.
>>
>> I created a 5Gig file on vista (ultimate 32-bit), and transferred it to
>> a samba share on my ubuntu server (8.10 - with the share folder being on
>> a RAID0 array capable of writes of 70MB/s) at a rate of 30MB/s or so
>> (about the read speed of the vista disk).
>>
> FWIW I've been able to achieve about the same speeds over smb and nfs
> shares sitting on my linux boxes. The interesting bit is I know the hard
> drives are capable of much more (around 60MBps). Also interesting is the
> raw network rate between the machines (tested with ttcp) is around
> 80MBps.
>
> My only guess is the PCI bus is being saturated. Does anyone know if the
> SATA controllers on modern MBs still share the PCI bus with the physical
> slots? That would explain the drop in speed.

Again, FWIW, I recently did my network 'upgrade' to jumbo frames. For
those interested, I have the following to say...

Small home network (6 computers + a printer). I got D-Link PCI gigabit
cards, which support an 8170-byte jumbo frame. All seemed to be fine,
with things working OK, but about a week later my server crashed. After
investigation, it seems there is a driver bug in the Linux code for the
D-Link card: it tries to allocate contiguous (4K) blocks of memory for a
large frame, but the allocation fails (fragmented memory is 'normal'),
so the driver simply retries and retries until the network transmission
times out, at which point the kernel reports a failure and shuts down
the interface... essentially killing a network-based server.

So, I 'upgraded' to an Intel PCI-Express gigabit card (the Pro1000 PT),
and what a difference (apart from being 3x the cost of the D-Link)...
It supports full 9000-byte jumbo frames and is very light on the CPU.

I decided to 'burn in' the network cards (I got two: one for my primary
workstation and one for the server), and I set up a Java process on each
system to stream data from one machine's 'bitbucket' to the other's (a
rough sketch of that kind of source/sink pair is at the end of this
message). I got sustained transfers of 95MB/s from the Ubuntu server to
the Vista machine, and a sustained 73MB/s from the Vista machine to the
Ubuntu server. In each case the data was using jumbo frames. When
running the streams simultaneously in both directions (full duplex), the
transfer rates dropped to about 60MB/s and 45MB/s respectively. On both
machines, about 2% of one CPU was used for the entire thing.

I have real tests on my disk system that show the write speed of the one
filesystem is at least 70MB/s sustained, with 120MB/s peaks. So, perhaps
I should run the bit-bucket streaming while simultaneously exercising
the disk subsystem, and see if the PCI (PCI-X) systems are co-dependent.

Rolf
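
For anyone who wants to try the same test, the idea is just a TCP
'source' that writes zero-filled buffers as fast as it can, and a 'sink'
that reads and discards them while printing throughput once a second. A
minimal sketch of that kind of source/sink pair is below; the class
name, port number and buffer size are arbitrary illustrative choices,
not the actual code from my test.

import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

/*
 * Minimal 'bitbucket' throughput test. Run "java BitBucketStream" on
 * the receiving machine (sink), then "java BitBucketStream source
 * <host>" on the sending machine. The source pushes zero-filled
 * buffers as fast as it can; the sink discards them and prints MB/s
 * roughly once per second.
 */
public class BitBucketStream {

    static final int PORT = 5001;          // arbitrary test port
    static final int BUF_SIZE = 64 * 1024; // 64 KB per write/read

    public static void main(String[] args) throws Exception {
        if (args.length == 2 && args[0].equals("source")) {
            source(args[1]);
        } else {
            sink();
        }
    }

    // Receiver: accept one connection, read and discard everything,
    // reporting throughput about once a second.
    static void sink() throws Exception {
        try (ServerSocket server = new ServerSocket(PORT);
             Socket s = server.accept();
             InputStream in = s.getInputStream()) {
            byte[] buf = new byte[BUF_SIZE];
            long total = 0, lastTotal = 0;
            long lastTime = System.nanoTime();
            int n;
            while ((n = in.read(buf)) != -1) {
                total += n;
                long now = System.nanoTime();
                if (now - lastTime >= 1_000_000_000L) {
                    double seconds = (now - lastTime) / 1e9;
                    double mbPerSec = (total - lastTotal) / 1e6 / seconds;
                    System.out.printf("%.1f MB/s%n", mbPerSec);
                    lastTotal = total;
                    lastTime = now;
                }
            }
        }
    }

    // Sender: connect and write zero-filled buffers until killed.
    static void source(String host) throws Exception {
        try (Socket s = new Socket(host, PORT);
             OutputStream out = s.getOutputStream()) {
            byte[] buf = new byte[BUF_SIZE]; // contents don't matter
            while (true) {
                out.write(buf);
            }
        }
    }
}

Note that the jumbo frames themselves are enabled by setting the
interface MTU to 9000 on both machines, outside of Java; the streaming
code is the same either way.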