In SX Microcontrollers, SX/B Compiler and SX-Key Tool, AtomicZombie wrote:

Yes, now that I have had some time to sketch this down, I see what you mean. If I had no frame buffer, all I could really do is blank part of the screen - not compress it. I did find one other way to place a live image in the upper left quadrant while being able to fill the other three quadrants with info, although it is a bit ugly...

Since the 1/4-size resulting video will have a resolution of 300x200 pixels at best, why not just pass the entire NTSC signal from the camera's input through to the destination output and then mask right over the signal? Yes, this would waste most of the camera's image, but it would end up doing the same thing as compressing the image.

So it goes like this:

- wait for vsync - start loop
- wait for hsync, then wait for half of the line to be drawn, or about 400 pixels
- software-generate the info for the first quadrant - top right
- once 200 lines or so have passed, generate the other two quadrants
- start over

Yes, this is really just an overlay that leaves 1/4 of the incoming camera signal unmasked, but it would really do the same thing as my first idea. The only con is that you would have to remember to aim the camera slightly to the left, since only 1/4 of its CCD will be used.

Thanks for all the great ideas - I now have a rough plan on how to attack this project. Should have the SX-Key in a week, so I will post my results - if there are any :)

Next project - to generate color video using the AD725 IC: [url=http://www.analog.com/en/prod/0,2877,AD725,00.html]http://www.analog.com/en/prod/0,2877,AD725,00.html[/url] This IC looks very promising.

Cheers, Brad

[quote="Paul Baker"]
Interesting project Brad. If I've thought this through properly, your application will require a frame buffer.
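The overlay idea above can be sketched in Python to show the pixel-selection logic (the dimensions and the `info_pixel` helper are illustrative stand-ins - on the SX this would be done per-pixel against hsync/vsync timing, not on stored frames):

```python
# Sketch of the quadrant-overlay idea: pass the incoming signal through
# untouched in the upper-left quadrant and overwrite the other three
# quadrants with software-generated info. Dimensions are illustrative.

WIDTH, HEIGHT = 320, 240              # full output frame (illustrative)
HALF_W, HALF_H = WIDTH // 2, HEIGHT // 2

def info_pixel(x, y):
    """Hypothetical generator for the overlay 'info' pixels."""
    return 0  # e.g. black background for text/graphics

def overlay_frame(camera_frame):
    """Return an output frame: live video upper-left, info elsewhere."""
    out = []
    for y in range(HEIGHT):            # one pass per scan line (after hsync)
        row = []
        for x in range(WIDTH):
            if x < HALF_W and y < HALF_H:
                row.append(camera_frame[y][x])   # unmasked live quadrant
            else:
                row.append(info_pixel(x, y))     # masked: generated info
        out.append(row)
    return out
```

Note that nothing is buffered here: each output pixel depends only on the current input pixel or on generated data, which is why this overlay approach avoids needing a frame buffer at all.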
First, if you place the input video into the 1st, 2nd or 3rd quadrants (upper right, upper left, lower left) you will have a delay of one frame, because the input video won't present its current frame's lower-right pixel until the very last pixel, but you will need to place it in the output video before then. If you place it in the 4th quadrant (lower right) there is no frame delay needed, but you need to store pixels for the output. It turns out that regardless of which quadrant you choose, you'll need to store roughly the same amount of data.

In the 4th quadrant, you'll need to store the first half of the subsampled pixels (ballparked; there's the question of the first half of the next row, but after the pixels are placed in the output frame the storage can be reused). If the input video is sampled at 320x240 and subsampled to 160x120, the first half of the subsampled frame will need storage of 80x120, or 9600 pixels. Using 256 bytes on the SX52 (there's a few more, but your program will need its own variables) that's 2048 bits, so not even a black-and-white rendering is possible. A 40x30 (1/16th) frame would only require storing its first half, 600 pixels, enabling 3 bits of greyscale per pixel.

So you see, you're going to require more than 8K for 8-bit greyscale, or half that for 4-bit. But the SX48/52 is fast enough to access external fast SRAM (and has enough pinout to not require external logic) with time to spare. If you want a simple memory layout (each pixel has a unique storage location) it will require 19200 bytes for 8-bit, 9600 for 4-bit.
[/quote]

---------- End of Message ----------

You can view the post on-line at: http://forums.parallax.com/forums/default.aspx?f=7&p=1&m=87544#m87899
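As a sanity check, the storage arithmetic in Paul's quote can be reproduced in a few lines (the 256-byte figure is the usable SX52 RAM assumed in the post; the resolutions are the ones he names):

```python
# Reproduce the buffer-size arithmetic from the quoted post.

RAM_BYTES = 256                   # usable SX52 RAM assumed in the post
RAM_BITS = RAM_BYTES * 8          # 2048 bits

def half_frame_pixels(w, h):
    """Pixels needed to buffer the first half of a subsampled frame."""
    return (w // 2) * h

# 320x240 input subsampled to 160x120:
p_160 = half_frame_pixels(160, 120)      # 9600 pixels to buffer
fits_1bit = p_160 <= RAM_BITS            # False: not even 1-bit B/W fits

# 40x30 (1/16th-size) frame:
p_40 = half_frame_pixels(40, 30)         # 600 pixels to buffer
bits_per_pixel = RAM_BITS // p_40        # 3 bits of greyscale per pixel

# Simple external-SRAM layout (one location per pixel, full 160x120 frame):
bytes_8bit = 160 * 120                   # 19200 bytes at 8 bits/pixel
bytes_4bit = bytes_8bit // 2             # 9600 bytes at 4 bits/pixel
```

The numbers match the quote: a 160x120 half-frame needs 9600 pixels against only 2048 bits of on-chip RAM, while the simple external-SRAM layout needs 19200 bytes for 8-bit greyscale or 9600 for 4-bit.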