On 1/11/06, Stephen R Phillips wrote:
>
> --- Gus Salavatore Calabrese wrote:
> > Another issue I was pondering was whether the resolution of the
> > camera could be improved by taking a shot, moving it a fraction of a
> > pixel, taking a shot, moving it a fraction of a pixel, .......
> > Would it be possible to compute sub-pixels by doing this ?
>
> No. Ok, a LITTLE bit of information about sensors here. Your typical
> camera image element is a grid of monochromatic sensors with pass
> filters above them. They are arranged in what is termed a Bayer
> pattern. Something like
>
> R G
> G B
>
> The reality is your typical digital camera's resolution SAYS 3
> megapixels, for example, but it most certainly is NOT 3 megapixels.
> It's a bit of deception. They estimate the color at the other pixel
> locations by converting the pattern through a filter into RGB pixels.
> However, to be blunt and to the point, the RGB values are a guess at
> best. A more expensive but accurate system involves precise lenses,
> dichroic mirrors, and three image sensors. A company was developing a
> sensor that was true RGB; however, I've not seen it hit the market.

How the camera captures the image is relevant, but your explanation
does not show that one could not obtain sub-pixel resolution from
multiple shots of the same subject. I don't see a reason why it's not
possible. In fact, some of the algorithms used to convert a
Bayer-pattern image into a "regular" RGB image are applicable to this
problem. NASA uses these techniques to produce high-resolution images
of Mars.

Imagine a one-pixel, monochromatic camera. You've taken four images of
one subject, offsetting each image from the others by 1/2 of the pixel
size.
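(As a concrete illustration of the Bayer-to-RGB conversion mentioned above, here is a minimal bilinear-interpolation sketch. The RGGB layout and all values are hypothetical; this is the simplest form of the "guess", not any particular camera's algorithm.)

```python
import numpy as np

def box_sum(a):
    """Sum each 3x3 neighbourhood of a 2-D array (zero-padded edges)."""
    p = np.pad(a, 1)
    h, w = a.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

def demosaic_bilinear(raw):
    """Estimate the two missing colour channels at every photosite by
    averaging the nearest measured samples of that channel."""
    h, w = raw.shape
    yy, xx = np.mgrid[0:h, 0:w]
    masks = [
        (yy % 2 == 0) & (xx % 2 == 0),   # R sites (assumed RGGB layout)
        (yy % 2) != (xx % 2),            # G sites
        (yy % 2 == 1) & (xx % 2 == 1),   # B sites
    ]
    rgb = np.zeros((h, w, 3))
    for ch, m in enumerate(masks):
        plane = np.where(m, raw, 0.0)
        count = box_sum(m.astype(float))
        interp = box_sum(plane) / np.maximum(count, 1.0)
        # Keep measured samples; fill the rest with the neighbour average.
        rgb[..., ch] = np.where(m, raw, interp)
    return rgb
```

Two-thirds of every output pixel is interpolated, which is why the quoted message calls the RGB values "a guess at best".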
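(The four-image thought experiment can be sketched numerically. A minimal least-squares reconstruction in Python, assuming half-pixel offsets and a hypothetical 3x3 scene; each one-pixel exposure averages a 2x2 block of the underlying cell grid:)

```python
import numpy as np

# Hypothetical ground truth: the 3x3 scene the one-pixel camera samples.
scene = np.array([[1.0, 2.0, 3.0],
                  [4.0, 5.0, 6.0],
                  [7.0, 8.0, 9.0]])

# Four camera positions, each shifted by half a pixel; the single pixel
# integrates (averages) a 2x2 block of the 3x3 cell grid.
offsets = [(0, 0), (0, 1), (1, 0), (1, 1)]

# Build the 4x9 sampling matrix A and the four measurements b.
A = np.zeros((4, 9))
b = np.zeros(4)
for k, (r, c) in enumerate(offsets):
    for dr in range(2):
        for dc in range(2):
            A[k, (r + dr) * 3 + (c + dc)] = 0.25  # each cell is 1/4 of the pixel
    b[k] = A[k] @ scene.ravel()

# Four equations, nine unknowns: the system is underdetermined, so
# lstsq returns the minimum-norm estimate -- better than one pixel,
# but not as good as a true 9-pixel image.
x, *_ = np.linalg.lstsq(A, b, rcond=None)
estimate = x.reshape(3, 3)
```

The estimate reproduces all four measurements exactly, but the five missing degrees of freedom are filled in by the minimum-norm assumption rather than recovered from data.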
If you overlay them on top of each other, relative to the area of the
picture, you end up with a 3x3 table: each image covers four cells of
the table. The central cell has been imaged four times, but each of
those measurements mixes it with three neighbouring cells, so no single
image isolates the center cell. The four corner cells were each imaged
only once, and the remaining four cells were imaged twice. Using linear
algebra it's possible to obtain a 9-pixel image from those four
single-pixel images. It won't be as good as a true 9-pixel image (four
measurements cannot fully determine nine unknowns), but it will be much
better than the one-pixel image.

If you want to do the same in a more complex situation (more pixels, a
Bayer pattern, etc.), there are a number of ways to extend the
solution. In any case, yes, it is possible to increase the resolution
of the image by taking multiple pictures of something with the camera
slightly offset. Also, with a telecine camera it won't matter as much,
but in most cases you want to step the image sensor by a sub-pixel
amount rather than stepping the lens and image sensor together.

As an aside, Foveon is the company with the neat stacked sensor. I
haven't heard much about them recently, but they would be ideal for
increasing the resolution by stepping the image sensor. They did
release their first sensor, which is available in a Sigma camera. You
can get a sensor evaluation kit as well: http://www.foveon.com/ .

-Adam

--
http://www.piclist.com PIC/SX FAQ & list archive
View/change your membership options at
http://mailman.mit.edu/mailman/listinfo/piclist