> > FILT <-- FILT*(1-F) + NEW*F
>
> Man, here is where I run into a wall.

First of all, by "<--" I meant assignment. (It was supposed to look like
a left arrow. I don't like using "=" for assignment because "=" is
already used in mathematics as a statement of equality.) In other words,
FILT receives the value of the expression to the right of the "<--".

Suppose I wrote the operation (it's not an equation) like this:

  FILT <-- FILT*A + NEW*B

Perhaps now it is more obvious that this "blends" FILT and NEW to make
the new value of FILT. For example, if you simply wanted the average of
FILT and NEW, then A = 1/2 and B = 1/2. What would you do if you wanted
to "average" the two but wanted one to have a higher weight than the
other? You could, for example, use A = 1/4 and B = 3/4. That would blend
the two, but NEW would be weighted higher. Note that you achieve a
weighted blending between the two input values as long as A + B = 1.

Since A + B = 1, you can also say A = 1 - B. Once you have determined B,
you no longer have a choice about A (and vice versa). So let's re-write
the operation to eliminate A and derive it from B instead:

  FILT <-- FILT*(1-B) + NEW*B

Same thing as at top, except that there I happened to call the blend
adjustment factor F instead of B.

As I said before, this operation is easy to compute when F = 2**-N. For
example, let's use N = 3. That means F = 1/8 and 1-F = 7/8. The filter
operation becomes:

  FILT <-- FILT*(7/8) + NEW*(1/8)

This means the latest reading will be responsible for 1/8 of the
filtered value, and all previous readings together determine the other
7/8 of the filtered value.

Also consider the weighting of any one reading. It starts out at 1/8.
After the next reading is filtered in, it has a weight of 1/8 * 7/8.
After another reading it is 1/8 * 7/8 * 7/8, then 1/8 * 7/8 * 7/8 * 7/8.
After J new readings are added to the filter, the weight of the
preceding reading will be 1/8 * (7/8)**J, or symbolically F * (1-F)**J.
So now it is obvious (I hope) that every new reading starts out with a
weight of F, then decays exponentially as new readings are accumulated.

> where FILT is the value to be updated, NEW is the accumulated value of
> the new reading, and N is the number of new readings that have been
> accumulated.

Forget this "number of accumulated readings". You don't need to keep any
persistent state except FILT. You are still thinking "average". You need
to get past that.

> That's pretty much what I've got now, really. On every iteration I do
> some other data collection and calculation, then average out the most
> recent 20 (could be more or less depending on available RAM) pulse
> widths, calculate the RPM from that, and display the lot.

Actually that sounds quite different from what I suggested, but I don't
want to go there again.

> Aside from code size and speed (neither of which is an issue here),
> what would I gain by using the single-pole low pass filter?

The advantages are code size, speed, greatly reduced RAM requirements,
and a somewhat better response characteristic. It is also beneficial to
learn this because someday code size and speed will matter, and it will
be useful to understand how to do a single pole filter.

By the way, a multi-pole low pass filter is just multiple single-pole
low pass filters cascaded. And a high pass filter is just the original
signal minus the low pass of that signal. You can do a lot of useful
things with the basic single pole low pass filter building block.
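To make the basic operation concrete, here is one way it could look in
C, floating point form. This is a minimal sketch: FILT and NEW follow
the names above, everything else is just for illustration.

  /* Single pole low pass filter, floating point form.
  ** F is the blend fraction, 0 < F <= 1. A larger F makes the
  ** filter track new readings faster but smooth less. */
  static float filt = 0.0f;              /* the only persistent state */

  void filter_new (float new_reading, float f) {
    filt = filt*(1.0f - f) + new_reading*f;
    }

Note that the only persistent state really is FILT itself: no buffer of
past readings, no count of how many readings have been accumulated.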
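When F = 2**-N, the multiplies collapse into right shifts, so the whole
update is a shift, a subtract, and an add. A sketch in C for N = 3
(F = 1/8), assuming 16-bit unsigned readings. Keeping the filter state
scaled up by 2**N so the bits shifted out of the update aren't lost to
truncation is one common way to do this; the names are just for
illustration.

  #include <stdint.h>

  /* Single pole low pass filter with F = 1/8 (N = 3).
  ** FILT8 holds 8x the filtered value, so the low bits that would
  ** otherwise be truncated by the shift are preserved. */
  static uint32_t filt8 = 0;

  void filter_new (uint16_t new_reading) {
    filt8 = filt8 - (filt8 >> 3) + new_reading;  /* FILT*(7/8) + NEW*(1/8), 8x scaled */
    }

  uint16_t filter_value (void) {
    return (uint16_t)(filt8 >> 3);               /* un-scale to get FILT */
    }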
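And to illustrate that last paragraph: cascading two of the stages above
gives a two-pole low pass filter, and subtracting the low pass result
from the original signal gives a high pass filter. Same assumptions as
the previous sketch.

  #include <stdint.h>

  static uint32_t filt1_8 = 0;     /* first stage, 8x scaled */
  static uint32_t filt2_8 = 0;     /* second stage, 8x scaled */

  void filter_new (uint16_t new_reading) {
    filt1_8 = filt1_8 - (filt1_8 >> 3) + new_reading;     /* pole 1 */
    filt2_8 = filt2_8 - (filt2_8 >> 3) + (filt1_8 >> 3);  /* pole 2, fed by pole 1 */
    }

  uint16_t lowpass_value (void) {            /* two-pole low pass output */
    return (uint16_t)(filt2_8 >> 3);
    }

  int32_t highpass_value (uint16_t new_reading) {
    /* High pass = original signal minus its low pass. */
    return (int32_t)new_reading - (int32_t)(filt2_8 >> 3);
    }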
********************************************************************
Olin Lathrop, embedded systems consultant in Littleton Massachusetts
(978) 742-9014, olin@embedinc.com, http://www.embedinc.com
--
http://www.piclist.com hint: PICList Posts must start with ONE topic:
[PIC]:,[SX]:,[AVR]: ->uP ONLY! [EE]:,[OT]: ->Other [BUY]:,[AD]: ->Ads