On Fri, Mar 25, 2011 at 11:33 AM, V G wrote:
> Hi all,
>
> I'm developing imaging software for my university which processes very large
> and high resolution images (from microscopes). The images are around 200 MB
> in size and not too sure about the dimensions yet. A simple algorithm
> written in Java by a friend resulted in a run time of about 60 minutes for a
> bunch of images (not sure exactly how many, but doesn't really matter).
> I'm wondering if nVidia CUDA on a suitable video card such as a GTX460
> (Fermi) would be a good choice for a task like this. Instead of processing
> one pixel at a time, would it be possible to use the video card to process
> multiple at a time?

Yes, it is possible. Depending on your algorithm you can map one thread to
one pixel (or to a small tile of pixels). The GPU schedules far more threads
than it has cores, so the core count limits throughput rather than how many
threads you can launch. There is a CUDA binding for Java as well, but I don't
think Java is the best option here - stick with the C API. A GTX460 gives you
336 CUDA cores to work with.

If the images are around 200 MB each, that is about 1.7 billion bits per
file. Assuming 32-bit pixels (8 hex digits, e.g. RGBA), that works out to
roughly 52 million pixels, or about 7240x7240 if the images are square. The
GTX460's texture rate is about 37.8 billion texels/sec, so simply touching
every pixel once is a matter of milliseconds; in practice the time per image
will be dominated by reading the file, copying the 200 MB across the PCIe
bus, and whatever work your per-pixel kernel actually does, but you should
still end up far below the 60 minutes the Java version needs. With a second
card you can split the batch of images between the two GPUs and roughly halve
the total time.

Stick with the C API.
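
To make the thread-per-pixel idea concrete, here is a minimal CUDA C sketch.
It is only an illustration, not your algorithm: it assumes 32-bit RGBA pixels
and a 7240x7240 image (roughly the 200 MB case above), and the per-pixel work
is a placeholder brightness threshold - your real processing would go in the
kernel body. Build with nvcc, e.g. "nvcc -O2 -o threshold threshold.cu".

// Minimal one-thread-per-pixel CUDA example.  Each thread reads one
// 32-bit RGBA pixel and applies a simple brightness threshold.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void thresholdKernel(const unsigned char *in, unsigned char *out,
                                int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height)
        return;                                 // threads outside the image do nothing

    size_t idx = ((size_t)y * width + x) * 4;   // 4 bytes per pixel (RGBA)
    int lum = (in[idx] + in[idx + 1] + in[idx + 2]) / 3;
    unsigned char v = (lum > 128) ? 255 : 0;    // placeholder per-pixel work
    out[idx]     = v;
    out[idx + 1] = v;
    out[idx + 2] = v;
    out[idx + 3] = in[idx + 3];                 // keep alpha unchanged
}

int main(void)
{
    const int width  = 7240;                    // ~200 MB at 4 bytes per pixel
    const int height = 7240;
    const size_t bytes = (size_t)width * height * 4;

    unsigned char *hIn  = (unsigned char *)malloc(bytes);
    unsigned char *hOut = (unsigned char *)malloc(bytes);
    for (size_t i = 0; i < bytes; ++i)          // dummy data; the real program
        hIn[i] = (unsigned char)(i & 0xFF);     // would load the image file here

    unsigned char *dIn, *dOut;
    cudaMalloc((void **)&dIn,  bytes);
    cudaMalloc((void **)&dOut, bytes);
    cudaMemcpy(dIn, hIn, bytes, cudaMemcpyHostToDevice);

    dim3 block(16, 16);                         // 256 threads per block
    dim3 grid((width  + block.x - 1) / block.x,
              (height + block.y - 1) / block.y);
    thresholdKernel<<<grid, block>>>(dIn, dOut, width, height);
    cudaDeviceSynchronize();                    // wait for the kernel to finish

    cudaMemcpy(hOut, dOut, bytes, cudaMemcpyDeviceToHost);
    printf("first output pixel value: %d\n", hOut[0]);

    cudaFree(dIn);
    cudaFree(dOut);
    free(hIn);
    free(hOut);
    return 0;
}

Note the two device buffers together need roughly 420 MB, which fits on the
1 GB GTX460; on the 768 MB version you would process the image in tiles or
write the result back into the input buffer instead.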