V G wrote:
> I'm developing imaging software for my university which processes
> very large and high resolution images (from microscopes). The images
> are around 200 MB in size and not too sure about the dimensions yet.
> A simple algorithm written in Java by a friend resulted in a run time
> of about 60 minutes for a bunch of images (not sure exactly how many,
> but doesn't really matter).
>
> I'm wondering if nVidia CUDA on a suitable video card such as a GTX460
> (Fermi) would be a good choice for a task like this. Instead of
> processing one pixel at a time, would it be possible to use the video
> card to process multiple at a time?

Possibly, depending on how easy it is to slosh different sections of the
image into and out of the hardware, and how easy it is to program the
hardware to do what you want.

Image processing algorithms are one case where you have to watch the
cycles, since the inner loop or two will be executed many millions of
times.  That makes Java a dumb choice.

Before trying to get into the NVidia hardware, I would try regular
compiled code written with efficiency in mind.  Most modern PCs have
multiple cores, so breaking the algorithm into a small number of
sections that execute in parallel would be a good start (rough sketches
of both the multi-core and the CUDA approaches follow below).  You may
get good enough performance doing it this way.  Keep paging, cycles, and
parallelism in mind, and ordinary compiled code should be able to beat
the Java app handily.
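
For the multi-core route, plain C with OpenMP is usually the least
effort.  This is only a sketch: the function name, the 8-bit grayscale
layout, and the threshold operation are stand-ins for whatever your
real per-pixel algorithm actually does.

/* Sketch only: split a large grayscale image across CPU cores with
   OpenMP.  The pixel type and the per-pixel operation (a simple
   threshold here) are placeholders for the real algorithm. */

#include <stdint.h>
#include <stddef.h>

void process_image(uint8_t *pixels, size_t width, size_t height,
                   uint8_t threshold)
{
    /* Each thread gets a contiguous block of rows, which keeps the
       memory accesses sequential and cache-friendly. */
    #pragma omp parallel for schedule(static)
    for (long y = 0; y < (long)height; y++) {
        uint8_t *row = pixels + (size_t)y * width;
        for (size_t x = 0; x < width; x++) {
            row[x] = (row[x] > threshold) ? 255 : 0;
        }
    }
}

Build with something like "gcc -O2 -fopenmp" and each core works on its
own band of rows, which is exactly the small number of parallel sections
I mentioned above.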
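
If you do end up trying CUDA, the usual mapping is one thread per pixel,
so the card works on thousands of pixels at a time instead of one.
Again a sketch only, with the same stand-in threshold operation and
made-up names, and with error checking on the CUDA calls omitted.

/* Sketch only: one CUDA thread per pixel.  Assumes the whole image fits
   in card memory; a 200 MB image does on a GTX 460, but larger images
   would have to be tiled in and out. */

#include <cuda_runtime.h>
#include <stdint.h>

__global__ void threshold_kernel(uint8_t *pixels, int width, int height,
                                 uint8_t threshold)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x < width && y < height) {
        int i = y * width + x;
        pixels[i] = (pixels[i] > threshold) ? 255 : 0;
    }
}

void process_on_gpu(uint8_t *host_pixels, int width, int height,
                    uint8_t threshold)
{
    uint8_t *dev_pixels;
    size_t bytes = (size_t)width * height;

    cudaMalloc(&dev_pixels, bytes);
    cudaMemcpy(dev_pixels, host_pixels, bytes, cudaMemcpyHostToDevice);

    dim3 block(16, 16);                       /* 256 threads per block */
    dim3 grid((width  + block.x - 1) / block.x,
              (height + block.y - 1) / block.y);
    threshold_kernel<<<grid, block>>>(dev_pixels, width, height, threshold);

    cudaMemcpy(host_pixels, dev_pixels, bytes, cudaMemcpyDeviceToHost);
    cudaFree(dev_pixels);
}

Note that the two cudaMemcpy calls are part of the cost you have to
beat.  If the per-pixel work is trivial, the copies in and out of the
card can eat most of what the kernel gains you.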