>> Well, some scratch area is fine, I am not opposed to some memory used for
>> compression. However, it seems most algorithms want LOTS of memory, and I've
>> seen some that say they will only work on 16 or 32 bit architectures. The
>> only problem I see with RLE is that perhaps the signal slowly changes, but
>> just enough that there aren't any runs to encode... perhaps then I'd need
>> better filtering if it's due to noise...

1. Some sort of Huffman coding, perhaps. If it's usually different but slow
moving then it must be "hunting" over a limited range and there will be
patterns to it. eg if 0 is down 0.1 degree and 1 is up 0.21 degree then an
eg 010010100111101 pattern may well be typical. If you CARE about such fine
changes then you could have a lookup table of sequences.

2. If changes are usually only by a few units you could compress the step
into a few bits. eg nibble coding:

   0000  Use extended code
   0001  +0.1C
   ...
   0110  +0.6C
   0111  +0.7C
   1001  -0.1C
   ...
   1110  -0.6C
   1111  No change

Giving a +0.7C to -0.6C range in half a byte, at the cost of 4 more bits when
the step is greater than this range. Or, if most steps are under about 0.3C,
you could use 3 bit coding and not align at byte boundaries. Fun will ensue
:-).

3. If there are long periods without change then logging the time when
changes occur may be more data economical.

4. ADPCM.

5. SD memory is getting so very very very cheap that using it if possible
sounds very attractive.

Russell

--
http://www.piclist.com PIC/SX FAQ & list archive
View/change your membership options at
http://mailman.mit.edu/mailman/listinfo/piclist
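As a postscript, the nibble delta coding in point 2 can be sketched in C. This is a hypothetical illustration, not code from the thread: the sample width (unsigned 8-bit readings in 0.1C units), the extended-code layout (full sample in the next two nibbles), and the function names are all assumptions.

```c
#include <stdint.h>

/* Hypothetical nibble-delta codec illustrating point 2.
 * Samples are assumed to be unsigned 8-bit readings in 0.1C units.
 * One nibble per step:
 *   0x0       extended code: full 8-bit sample follows in two nibbles
 *   0x1..0x7  delta of +1..+7 units (+0.1C .. +0.7C)
 *   0x9..0xE  delta of -1..-6 units (-0.1C .. -0.6C)
 *   0xF       no change
 */

static int put_nibble(uint8_t *buf, int pos, uint8_t nib)
{
    if ((pos & 1) == 0)
        buf[pos >> 1] = (uint8_t)(nib << 4);   /* high nibble first */
    else
        buf[pos >> 1] |= nib;
    return pos + 1;
}

static uint8_t get_nibble(const uint8_t *buf, int pos)
{
    return (pos & 1) == 0 ? (uint8_t)(buf[pos >> 1] >> 4)
                          : (uint8_t)(buf[pos >> 1] & 0x0F);
}

/* Encode n samples into out; returns the number of nibbles written. */
int nibble_encode(const uint8_t *samples, int n, uint8_t *out)
{
    int pos = 0;
    /* First sample is stored verbatim via the extended code. */
    pos = put_nibble(out, pos, 0x0);
    pos = put_nibble(out, pos, samples[0] >> 4);
    pos = put_nibble(out, pos, samples[0] & 0x0F);
    for (int i = 1; i < n; i++) {
        int d = (int)samples[i] - (int)samples[i - 1];
        if (d == 0)
            pos = put_nibble(out, pos, 0xF);
        else if (d >= 1 && d <= 7)
            pos = put_nibble(out, pos, (uint8_t)d);
        else if (d >= -6 && d <= -1)
            pos = put_nibble(out, pos, (uint8_t)(8 - d)); /* -1 -> 0x9 .. -6 -> 0xE */
        else {
            /* Step too big for a nibble: fall back to the extended code. */
            pos = put_nibble(out, pos, 0x0);
            pos = put_nibble(out, pos, samples[i] >> 4);
            pos = put_nibble(out, pos, samples[i] & 0x0F);
        }
    }
    return pos;
}

/* Decode nnib nibbles back into samples; returns the sample count. */
int nibble_decode(const uint8_t *buf, int nnib, uint8_t *out)
{
    int pos = 0, n = 0;
    uint8_t cur = 0;
    while (pos < nnib) {
        uint8_t nib = get_nibble(buf, pos++);
        if (nib == 0x0) {
            uint8_t hi = get_nibble(buf, pos++);
            uint8_t lo = get_nibble(buf, pos++);
            cur = (uint8_t)((hi << 4) | lo);
        } else if (nib == 0xF) {
            /* no change */
        } else if (nib <= 0x7) {
            cur = (uint8_t)(cur + nib);
        } else {
            cur = (uint8_t)(cur - (nib - 8));
        }
        out[n++] = cur;
    }
    return n;
}
```

With mostly small steps this packs two samples per byte, and a large jump costs 12 bits via the extended code. Whether the 4-bit or the 3-bit variant wins depends entirely on the actual step distribution, as noted above.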