Hi all, I'm developing imaging software for my university which processes very large, high-resolution images (from microscopes). The images are around 200 MB each; I'm not sure about the dimensions yet. A simple algorithm written in Java by a friend took about 60 minutes for a batch of images (not sure exactly how many, but that doesn't really matter). I'm wondering if NVIDIA CUDA on a suitable video card such as a GTX 460 (Fermi) would be a good choice for a task like this. Instead of processing one pixel at a time, would it be possible to use the video card to process many pixels at once?

-- 
http://www.piclist.com PIC/SX FAQ & list archive
View/change your membership options at http://mailman.mit.edu/mailman/listinfo/piclist
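For what it's worth, here is a rough sketch of what per-pixel parallelism looks like in CUDA. The kernel name, the brightness operation, and the image layout (row-major 8-bit grayscale, pitch equal to width) are all made-up assumptions for illustration; the point is just that each GPU thread handles one pixel, so a large image launches millions of threads that the card schedules across its cores:

```cuda
// Hypothetical kernel: scale the brightness of each pixel of an
// 8-bit grayscale image. 'gain' and the row-major layout are assumptions.
__global__ void scalePixels(unsigned char *img, int width, int height, float gain)
{
    // Each thread computes the coordinates of the one pixel it owns.
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;

    // Guard against threads that fall outside the image edge.
    if (x < width && y < height) {
        float v = img[y * width + x] * gain;
        img[y * width + x] = (v > 255.0f) ? 255 : (unsigned char)v;
    }
}

// Host side (sketch): copy the image into device memory with cudaMemcpy,
// then launch one thread per pixel in 16x16 blocks.
//   dim3 block(16, 16);
//   dim3 grid((width + 15) / 16, (height + 15) / 16);
//   scalePixels<<<grid, block>>>(d_img, width, height, 1.2f);
```

The catch for 200 MB images is the PCIe transfer: the copy to and from the card can dominate the run time unless you do enough work per pixel on the GPU to amortize it.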