The number of pixels in the camera is almost irrelevant once the signal is recorded or transmitted. Current standards are mostly based on component digital video (D1 / ITU-R 601), which uses color decimation and optional (simple?) compression. Typical schemes subsample as 4:2:2 (the relative sampling rates of Y, B-Y and R-Y respectively; the first PS below sketches what that means). This is the case for most satellite TV and broadcast feed data. Full-rate 4:4:4 sampling is never used for distribution, afaik.

The internal data paths in current state-of-the-art ENG and prosumer cameras (the $3k to $30k range) are 3 x 12 or 14 bits at the full pixel rate (which can be >20 MHz for HDTV), one channel per imager in a three-chip camera. Single-sensor cameras don't count, quality-wise. That fire hose is compressed down to D1/601 rates or to internal studio bus bandwidths before it leaves the camera head (the second PS below runs the numbers).

Most current HDTV and MPEG/OGG etc. standards are designed around the constraints imposed by such data streams, adapted to 'reasonable' screen refresh rates and screen sizes. For example, decoded MPEG-2 data bears a suspicious resemblance to certain D1/601 component formats (not by accident).

Peter
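
PS: to make 'color decimation' concrete, here is a minimal sketch of the chroma half of 4:2:2 subsampling, in C. It assumes full-rate 8-bit Cb/Cr input (Y stays at full rate) and just averages adjacent chroma pairs; the function name and the averaging are mine, for illustration only. Real gear uses proper low-pass filters before decimating.

/* Minimal 4:2:2 chroma decimation sketch.  Assumption: full-rate
   (4:4:4) 8-bit Cb and Cr samples for one scan line come in; half
   as many go out per color-difference channel.  Y is untouched and
   stays at the full pixel rate. */
#include <stdint.h>
#include <stddef.h>

void decimate_422(const uint8_t *cb_in, const uint8_t *cr_in,
                  uint8_t *cb_out, uint8_t *cr_out, size_t n)
{
    for (size_t i = 0; i + 1 < n; i += 2) {
        /* crude 2-tap average stands in for a real low-pass filter */
        cb_out[i / 2] = (uint8_t)((cb_in[i] + cb_in[i + 1] + 1) / 2);
        cr_out[i / 2] = (uint8_t)((cr_in[i] + cr_in[i + 1] + 1) / 2);
    }
}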
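
PPS: a back-of-the-envelope check on the fire hose versus the 601 bus. The 3 x 14-bit figure is from above; the 74.25 MHz HD pixel rate and the 13.5/6.75 MHz 601 sampling rates are standard values I am assuming here, not something measured.

/* Rough data-rate comparison: three-chip HD camera head vs. a
   601-style 4:2:2 bus.  Assumptions: 74.25 MHz per-channel pixel
   rate (1080-line HD), 14 bits per sample at the head; 13.5 MHz
   luma plus two 6.75 MHz color-difference channels at 8 bits on
   the 601 side. */
#include <stdio.h>

int main(void)
{
    double head = 3.0 * 14.0 * 74.25e6;           /* camera head, bit/s */
    double bus  = (13.5e6 + 2.0 * 6.75e6) * 8.0;  /* 601 4:2:2, bit/s   */

    printf("camera head: %.2f Gbit/s\n", head / 1e9); /* ~3.12 Gbit/s */
    printf("601 4:2:2  : %.0f Mbit/s\n", bus / 1e6);  /* 216 Mbit/s   */
    printf("ratio      : %.1f : 1\n", head / bus);    /* ~14.4 : 1    */
    return 0;
}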