Ok, so what you're essentially saying is that the image sensor is acting as
a low-pass filter in this case - there is a limit to the resolving power it
has, especially in the particular case you've shown.

However, the practical aspects of the image sensor actually make this an
easier job - since there's a space between each pixel that is not sensed,
even in your vertical-line example one could obtain some information about
the lines. They would likely show up as high-frequency moire patterns, I'm
guessing.

Further, which pixel are you talking about? In the case of white/black it
doesn't make too much of a difference, but in the case of red/green, then
whether you pick the size of a sub-pixel (one color-filtered pixel) or of a
"whole" pixel (two green, one red, one blue), you'll certainly be able to
resolve much more information than a simple solid color.

-Adam

On 1/12/06, Daniel Serpell wrote:
> Hi!
>
> On 1/12/06, M. Adam Davis wrote:
> >
> > How the camera captures the image is relevant, but your explanation
> > does not prove that one could not obtain sub-pixel resolution from
> > multiple shots of the same subject.
> >
> > I don't see a reason why it's not possible.
>
> Well, the problem really is much more complex, and related to filtering.
>
> First, an example. You take a photo of a pattern of white vertical lines
> on a black background, and each pixel takes in exactly one line and the
> background surrounding it. You see every pixel with the same value (at
> half the brightness of the line). If you move the camera to the side, you
> always get one line per pixel (with some line entering from the left as
> another leaves at the right), so you always obtain the same image.
> How can you reconstruct the original pattern?
>
> In detail, you can describe the process like this:
>
> * The original image ("infinite resolution") goes through the lens system.
>   The lens low-pass filters the image, convolving it with the diffraction
>   spot of the lens.
> * Then the image is sampled by the sensor, using rectangular pixels.
>   This applies another filter to the image (convolution with a box filter),
>   and then aliases the remaining high frequencies into the lower bands.
>
> Now, the aliasing effect can be reduced by taking new images (of the
> *same* scene) with the sensor at another (fractional-pixel) position.
>
> And then you can equalize the new image using an optimal inverse
> filter.
>
> The problem is, there are *some* frequencies that are highly attenuated
> by the filters (some are even zeroed), so you cannot restore the
> original information.
>
> To solve this problem, you can move the camera perpendicular to the
> image, so the sampling frequency changes. But the reconstruction
> process can be *very* difficult, and this can only be done if the scene
> is very far away, so moving the camera doesn't change the scene imaged.
>
> Daniel.
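
[Editor's note] A small numerical sketch of Daniel's vertical-line example,
and of Adam's point about the gaps between pixels. This is hypothetical
Python/NumPy code, not from the thread; the function and parameter names
(sample_sensor, OVERSAMPLE, fill) are made up for illustration. It integrates
a line pattern whose period equals the pixel pitch over square pixels, first
with 100% fill factor and then with a gap between pixels.

    import numpy as np

    OVERSAMPLE = 1000   # "infinite resolution" points per pixel pitch
    N_PIXELS   = 8      # width of the (1-D) sensor, in pixels

    def sample_sensor(shift_frac, fill=1.0):
        """Image a pattern of white lines (period = 1 pixel pitch, 50% duty)
        with square pixels of the given fill factor, offset by shift_frac of
        a pixel.  Returns one brightness value per pixel."""
        pix = np.arange(N_PIXELS)[:, None]                  # pixel index
        t   = np.arange(OVERSAMPLE)[None, :] / OVERSAMPLE   # position inside the sensed area
        x   = pix + t * fill + shift_frac                   # scene coordinate actually sensed
        scene = ((x % 1.0) < 0.5).astype(float)             # 1 = white line, 0 = black
        return scene.mean(axis=1)                           # box-filter (area) average

    # 100% fill factor: every shift gives the same flat image (all pixels 0.5),
    # because the box filter has a null exactly at the pattern's frequency.
    for s in (0.0, 0.25, 0.5, 0.75):
        print("fill=1.0 shift=%.2f ->" % s, sample_sensor(s, fill=1.0))

    # With a gap between pixels (80% fill), each shot is still uniform, but
    # its level now changes with the shift, so several shifted shots do carry
    # some information about the lines, as Adam suggests.
    for s in (0.0, 0.25, 0.5, 0.75):
        print("fill=0.8 shift=%.2f ->" % s, sample_sensor(s, fill=0.8))

With full fill factor the sub-pixel shifts are indistinguishable, which is
exactly Daniel's point about zeroed frequencies that no inverse filter can
bring back.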
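Continuing the same hypothetical sketch, Daniel's last point: moving the
camera perpendicular to the scene changes the magnification, so the pattern's
period is no longer exactly one pixel pitch. The line frequency moves off the
box filter's null and aliases to a slow, visible beat instead of a flat grey,
which is the information a reconstruction would have to work from.

    import numpy as np

    OVERSAMPLE = 1000
    N_PIXELS   = 16

    def sample_sensor(period_in_pixels):
        """Box-filter sampling (100% fill factor) of a line pattern whose
        period is 'period_in_pixels' pixel pitches."""
        x = np.arange(N_PIXELS * OVERSAMPLE) / OVERSAMPLE
        scene = ((x / period_in_pixels) % 1.0 < 0.5).astype(float)
        return scene.reshape(N_PIXELS, OVERSAMPLE).mean(axis=1)

    print(sample_sensor(1.0))    # period == pixel pitch: flat 0.5, lines invisible
    print(sample_sensor(1.15))   # slightly different magnification: a slow beat
                                 # across the sensor makes the lines visible again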