Hi,

Even just converting the Java code to native C++ should get you a huge speedup. I mean not .NET, but C++ that compiles to native machine code. nVidia is a good choice anyway, as they provide an API so you can use the GPU for calculations. I am not familiar with these video cards and their capabilities; all I know is that they are pretty fast for certain tasks. And if you need even more processing power, you could buy an FPGA card, something like this: http://www.fastertechnology.com/

Tamas

On Fri, Mar 25, 2011 at 9:33 AM, V G wrote:
> Hi all,
>
> I'm developing imaging software for my university which processes very large
> and high resolution images (from microscopes). The images are around 200 MB
> in size, and I'm not too sure about the dimensions yet. A simple algorithm
> written in Java by a friend resulted in a run time of about 60 minutes for a
> bunch of images (not sure exactly how many, but it doesn't really matter).
>
> I'm wondering if nVidia CUDA on a suitable video card such as a GTX460
> (Fermi) would be a good choice for a task like this. Instead of processing
> one pixel at a time, would it be possible to use the video card to process
> multiple at a time?

-- 
int main() { char *a,*s,*q; printf(s="int main() { char *a,*s,*q; printf(s=%s%s%s,
q=%s%s%s%s,s,q,q,a=%s%s%s%s,q,q,q,a,a,q); }", q="\"",s,q,q,a="\\",q,q,q,a,a,q); }

-- 
http://www.piclist.com PIC/SX FAQ & list archive
View/change your membership options at
http://mailman.mit.edu/mailman/listinfo/piclist
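
A minimal CUDA sketch of the per-pixel parallelism discussed in this thread. The kernel, image size, and threshold below are made-up placeholders, not the poster's actual algorithm: each GPU thread handles exactly one pixel, so the card works on thousands of pixels concurrently instead of one at a time.

#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Hypothetical per-pixel kernel: each thread processes exactly one pixel.
// The threshold operation stands in for whatever the real algorithm does.
__global__ void thresholdKernel(const unsigned char *in, unsigned char *out,
                                int width, int height, unsigned char cutoff)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x < width && y < height) {
        int i = y * width + x;
        out[i] = (in[i] > cutoff) ? 255 : 0;
    }
}

int main()
{
    const int width = 4096, height = 4096;   // stand-in image dimensions
    const size_t bytes = (size_t)width * height;

    // Fill a fake grayscale image on the host.
    unsigned char *h_in  = (unsigned char *)malloc(bytes);
    unsigned char *h_out = (unsigned char *)malloc(bytes);
    for (size_t i = 0; i < bytes; ++i) h_in[i] = (unsigned char)(i % 256);

    // Copy it to the card, run one thread per pixel, copy the result back.
    unsigned char *d_in, *d_out;
    cudaMalloc(&d_in, bytes);
    cudaMalloc(&d_out, bytes);
    cudaMemcpy(d_in, h_in, bytes, cudaMemcpyHostToDevice);

    dim3 block(16, 16);   // 256 threads per block
    dim3 grid((width + block.x - 1) / block.x,
              (height + block.y - 1) / block.y);
    thresholdKernel<<<grid, block>>>(d_in, d_out, width, height, 128);
    cudaDeviceSynchronize();

    cudaMemcpy(h_out, d_out, bytes, cudaMemcpyDeviceToHost);
    printf("first output pixel: %d\n", h_out[0]);

    cudaFree(d_in); cudaFree(d_out);
    free(h_in); free(h_out);
    return 0;
}

On a Fermi-class card like the GTX460 the 16x16 block maps onto whole warps, and the grid arithmetic rounds up so pixels at the image edges are still covered (hence the bounds check inside the kernel).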