... let me add my 2 cents... As far as I understand it, "interlace" is not an advantage for image capture, but for display. There are two main reasons to use interlace:

First:
------
When the TV CRT produces a complete frame, the goal is for the whole image to have flat, even brightness. A TV CRT has a (focus-adjustable) spot size, and because of how the phosphor responds, the center of the spot is brighter than its surroundings, like a flashlight beam. For this reason, the true center of a raster line is brighter than its edges. To keep this darkening from being visible, the image should be assembled with the raster lines as close together as possible, so that the "center" of raster line 56 overlaps the darker edge of line 55. The problem is that line 56 also has a darker edge, and even though it is not as bright as its center, it overlaps the bright part of line 55. Because this happens so fast (to your eye), the edge of line 56 effectively replaces part of the line 55 image, so the picture loses quality.

To eliminate this problem, interlace counts on your eyes. If the "odd" and "even" lines overlap within a single frame scan, your eyes see the quality loss. If the lines keep the same positions, but the "odd" lines are all drawn first and only then the "even" lines, the phosphor does not suffer the strong overlap effect, and neither do your eyes, so your brain "sees" a better-quality image.

Interlace is only useful when the phosphor decay has a certain relationship in time with the scan time itself. Suppose you increased your TV scan rate from 30 fps to 120 fps (repeating the same whole interlaced frame 4 times): the interlace would lose its effect, since the raster would be much faster than the phosphor decay, so the overlap would happen even with interlace.

Second:
-------
Phosphor decay and scan time. Phosphor decay is one of the major elements of any CRT's quality. A long decay gives a vivid, bright image, but with blur and loss of definition, since overlaps happen all the time. A short decay gives a sharp, well-focused image, but loses flat brightness over the whole picture: while the raster is being drawn at the bottom of the image, the top is already decaying and getting dark, so the result is scintillation (flicker).

Today's TVs and monitors no longer work only in dark family rooms with all the lights off and every family member quiet. Today they are installed everywhere, with ambient light all around, so they need as bright an image as possible. The most economical way to produce a bright image is to increase the phosphor decay time, but remember, that costs quality and sharpness. One way to increase the phosphor decay and still eliminate the blur is interlacing. It instantly gains 50% in image quality, since the vertical raster is split in time, so there is no more overlapping. The horizontal raster is still a mess, but your eyes notice the improvement.

So, I guess, the CCD chip captures all the image elements at once, closes the electronic shutter, and delivers the frame to you in interlaced mode just because *you want it that way*. There is no advantage for the CCD chip in doing this; the advantage is for the CRT and your eyes.

Wagner.

> #4) A corollary to #3 - When a CCD camera gives interlaced output, is the
> shutter only open once per frame, or once per field? In other words, can
> the interlacing cause motion blur problems?
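
To make the odd/even split (and the quoted shutter question) concrete, here is a minimal Python sketch. It is my own illustration, not code from any camera or SDK, and it assumes, as Wagner guesses above, that the sensor is exposed once per frame and the readout merely slices that frame into odd and even lines; all names here are hypothetical.

    # Minimal sketch of the odd/even field split described above.
    # Assumption (Wagner's guess): both fields are sliced from ONE exposure,
    # so the interlacing itself adds no motion blur between the two fields.
    from typing import List, Tuple

    Frame = List[List[int]]  # a frame as rows of pixel values

    def split_into_fields(frame: Frame) -> Tuple[Frame, Frame]:
        """Return (odd_field, even_field) taken from the same frame."""
        odd_field = frame[0::2]   # lines 1, 3, 5, ... (counting lines from 1)
        even_field = frame[1::2]  # lines 2, 4, 6, ...
        return odd_field, even_field

    # Hypothetical usage: a tiny 6-line "frame" split into two fields.
    frame = [[row * 10 + col for col in range(4)] for row in range(6)]
    odd, even = split_into_fields(frame)
    print("odd field :", odd)
    print("even field:", even)

If a camera instead exposed each field separately, the two fields would be captured roughly 1/60 s apart, so anything moving would sit in a different place in the odd and even lines (the familiar interlaced "comb" artifact). With a single exposure per frame, as assumed above, that cannot happen.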
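
The decay-versus-scan-time tradeoff in "Second" above can also be put into rough numbers. The sketch below assumes a simple exponential decay model for the phosphor; the time constants and frame rates are made-up illustrative values, not measurements of any real tube.

    # Rough illustration of phosphor decay vs. scan time (exponential model
    # assumed; the tau values below are invented for illustration only).
    import math

    def brightness_left(elapsed_s: float, tau_s: float) -> float:
        """Fraction of a line's initial brightness remaining after elapsed_s."""
        return math.exp(-elapsed_s / tau_s)

    FRAME_TIME = 1.0 / 30.0  # full progressive frame at 30 fps
    FIELD_TIME = 1.0 / 60.0  # one interlaced field: a vertical pass every 1/60 s

    for label, tau in [("short decay (sharp but flickery)", 0.005),
                       ("long decay  (bright but blurry) ", 0.040)]:
        print(f"{label}: top of picture at "
              f"{brightness_left(FRAME_TIME, tau):.0%} after a full frame, "
              f"{brightness_left(FIELD_TIME, tau):.0%} after one field")

With the short decay the top of the picture has almost vanished by the time the bottom is drawn (flicker); with the long decay it stays bright but successive scans overlap (blur). Interlacing gives the eye a complete vertical sweep every 1/60 s instead of every 1/30 s, even though each individual line is still refreshed only once per frame, which is why it lets you get away with a longer decay.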