It seems to me that the theoretical beast you want is a learning program: one that learns the characteristics of the background noise and feeds it back as negative feedback, attenuating the background. All you then need to do is run your Doppler-to-BCD conversion program on the remaining signal.

One way of doing this might take a couple of processors to achieve, but essentially it would involve the user pushing a "Calibrate" button in order to "learn the environment". Pressing the button would take the Doppler reading of the background noise, as measured by the Doppler converter, and convert it into a numeric value. That value could then be used to generate a signal at the same frequency as, and in phase with, the background noise. The "digitized" signal would of course be "cleaner" than the original, creating feedback and interference patterns, but those could likely be filtered out with a band-pass filter that blocks anything outside the "interesting" range.

The second processor (or a second thread on a fast enough base processor) would simply convert the numeric value back into a signal, which would probably be fed into the negative bias of an amp to attenuate the background. Since the background would be essentially subtracted out of the signal, the larger, interesting signal would be the only one left to measure.

CAVEAT: this only works if the fan runs at a constant rate. Old fans tend to surge, which can produce a varying frequency; in that case you would have to do a second-order analysis to find the surge frequency, so you could damp that with the feedback system as well.

Anyway, it sounds like an interesting problem.

GREY

GRAEME SMITH        email: grysmith@freenet.edmonton.ab.ca
YMCA Edmonton

Address has changed with little warning!  (I moved across the hall! :) )
Email will remain constant... at least for now.
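P.S. A minimal sketch of the calibrate-and-cancel idea, done digitally rather than in the amp. Everything here is illustrative: it assumes the fan noise is a single steady tone at a known frequency, and the function names (`calibrate`, `cancel`) and sample format (a plain list of floats) are my own invention, not anything from your setup.

```python
import math

def calibrate(samples, rate, freq):
    """'Calibrate button': estimate amplitude and phase of the background
    tone at `freq` Hz by correlating against quadrature references
    (effectively a single DFT bin)."""
    n = len(samples)
    c = sum(s * math.cos(2 * math.pi * freq * i / rate)
            for i, s in enumerate(samples))
    q = sum(s * math.sin(2 * math.pi * freq * i / rate)
            for i, s in enumerate(samples))
    amp = 2.0 * math.sqrt(c * c + q * q) / n
    phase = math.atan2(-q, c)
    return amp, phase

def cancel(samples, rate, freq, amp, phase):
    """Negative feedback: synthesize the learned background tone and
    subtract it, leaving only the 'interesting' signal."""
    return [s - amp * math.cos(2 * math.pi * freq * i / rate + phase)
            for i, s in enumerate(samples)]
```

Usage would be: press "calibrate" with nothing moving to capture background-only samples, learn `(amp, phase)` once, then run `cancel` on every later capture before the Doppler-to-BCD step. A surging fan breaks the constant-tone assumption, as noted above; you would have to re-estimate (or track) the parameters instead of learning them once.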