>Some time ago there was a thread about DTMF decoders and one gentleman
>stated that he had developed a fully functional DTMF decoder that used an
>incredibly small amount of PIC processor resources. He could not share the
>details with us because he had done the work for his employer. I'm sure I
>know how he did it. I stumbled on the trick quite by accident while testing
>software I wrote for my employer.

I found the rest of this fascinating and well worth the bandwidth. But either
I missed something in the description or the author did: how does he become
synchronized with the input signal in the first place?

This technique sounds like a halfway measure toward implementing a
phase-locked loop (PLL). With a PLL you lock to the input signal and
synthesize a copy of it with the PLL's VCO. The VCO output is the synchronous
signal used for detection. The noise immunity comes from the loop filter,
which is designed to respond best to the desired input frequency. Since DTMF
tones are not harmonically related, you need a PLL for each tone.

It seems that the scheme actually ignores synchronization. That doesn't mean
it won't work, though. Assuming the detector and source are running at
exactly the same frequency, the output level will depend on the phase
relationship between the two; the output could be zero if the relationship is
180 degrees. If the two frequencies are not identical, and they probably
aren't, then the output will beat at the difference frequency. This beat is
probably the signal he is actually detecting. It also integrates to a
non-zero level, as in the synchronous case, but depends on the capacitor time
constant to attenuate out-of-band frequencies. This will probably work as
long as there is a tuned RC filter for each of the DTMF tones. Because of the
beating, several cycles of input are needed, which means the detection will
be slower than is ultimately possible.

Can anyone clear this up for me?

Thanks!

Win Wiencke
ImageLogic@ibm.net
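
P.S. To make the phase and beat behavior above concrete, here is a quick
numerical sketch in C. It is not the original poster's code; the sample rate,
tone, and filter constants are just assumed values for illustration. It
multiplies an input sine by a reference square wave at the nominal tone
frequency and runs the product through a one-pole RC-style low pass.

/* dtmf_beat.c -- illustration only; build with: cc dtmf_beat.c -lm */
#include <stdio.h>
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define FS    8000.0    /* sample rate, Hz (assumed)          */
#define TONE   770.0    /* one of the DTMF row tones, Hz      */
#define ALPHA     0.01  /* one-pole RC low-pass coefficient   */
#define N     4000      /* 0.5 s of input                     */

/* Mix an input sine against a reference square wave at TONE and
   low-pass filter the product; report min/max of the settled half. */
static void run(const char *label, double f_in, double phase)
{
    double y = 0.0, lo = 1e9, hi = -1e9;
    for (int n = 0; n < N; n++) {
        double t   = n / FS;
        double x   = sin(2.0 * M_PI * f_in * t + phase);
        double ref = (sin(2.0 * M_PI * TONE * t) >= 0.0) ? 1.0 : -1.0;
        y += ALPHA * (x * ref - y);   /* RC-style integrator */
        if (n >= N / 2) {             /* watch the second half only */
            if (y < lo) lo = y;
            if (y > hi) hi = y;
        }
    }
    printf("%-18s min %+.3f  max %+.3f\n", label, lo, hi);
}

int main(void)
{
    /* Same frequency: the output settles to a level set only by the
       phase relationship -- near maximum in phase, near zero at 90. */
    run("770 Hz,  0 deg:", 770.0, 0.0);
    run("770 Hz, 90 deg:", 770.0, M_PI / 2.0);

    /* 2 Hz off frequency: the output never settles, it swings (beats)
       at the difference frequency, so several cycles of input are
       needed before a detection decision can be made.               */
    run("772 Hz input:  ", 772.0, 0.0);
    return 0;
}

With these values the in-phase run should settle near 2/pi (about 0.64), the
90-degree run near zero, and the 772 Hz run should swing between roughly
-0.6 and +0.6 at the 2 Hz difference frequency, which is the beat described
above.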