On Sat, 15 Jun 2002, Olin Lathrop wrote:

> > > One approach would be to "dither" the rounding by simply adding 1 or
> > > adding 0 on alternate iterations.
> >
> > I think that there is no way to make that low pass function span the
> > entire output domain. Because the math specifies an asymptotical approach
> > of the output value to the input value and asymptotes do not exist in the
> > integer math world.
>
> True, which is why a few fraction bits do allow the filter to span the
> entire range.  I usually make the filter value fixed point with a few
> extra bits on the right.  For example, a common case is filtering a 10 bit
> A/D value.  I use 16 bits for this with the 10 bit A/D value in the high
> bits.  That leaves 6 fraction bits for filter roundoff.  It will work
> exactly as long as the filter fraction is 2 ** -N, where N is from 0 to 6.

Actually, that's not entirely true.  As I showed in an earlier post, all
one needs to do is reconstruct the equation so that the asymptote converges
in the proper direction.

Most of the time when these integer weighted filters are presented, the new
value of the filter is the old value of the filter plus a weighted
difference between the input and the old value:

  filter = filter + (input - filter)*weight      ....... eq(1)

I agree that, when constructed this way, it's not possible using integer
arithmetic constrained to the dynamic range of the data to span the entire
dynamic range.  However, if the equation is rearranged:

  filter = input - (input - filter)*weight'      ....... eq(2)

it is.  Mathematically these are identical (weight' is equal to 1 - weight).
However, there are several subtle differences when integer arithmetic is
used.

In both cases, if the difference between the new input and the current
state of the filter is smaller than the reciprocal of the weighting
fraction, the integer arithmetic truncates the result.  Writing the weight
as 1/W, where W is an integer:

  if( abs(input - filter) < W )
      then (input - filter) / W is zero

In eq(1), this means that the filter will not change state.  In eq(2), it
means that the new filter state will become the same as the input.  In
other words, if the input spans to the extremes of the dynamic range then
the filter output will too.

Round to Zero
-------------

Now, you don't get something for nothing.  As I showed in the PIC assembly
implementation of this algorithm, you do need to control the rounding.
Specifically, you need to monitor the sign of the difference and round
toward zero.  If the difference is positive, the integer division already
rounds toward zero.  If the difference is negative, you need to add 1 to
the final result.  In pseudo code:

  ; compute (input - filter) / W
  ; where input, filter, and W are all integers
  ; and "/" rounds toward minus infinity (e.g. an arithmetic
  ; shift right when W is a power of two)
  ; implement "rounding toward zero"

  result = (input - filter) / W
  if( result < 0 )
      result++

> By the way 1/64 is a very heavy filter for most purposes because it has
> such a slow response time.  It has a 50% step response time of 45
> iterations and 90% in 147 iterations.  Splitting this into two poles of
> 1/8 each increases the cycles to compute it but also improves the response
> time.  The 50% response drops to 12 iterations, and the 90% response to 29
> iterations.  Both filters have the same 18dB attenuation of random noise.
>
> > I also think that the fiddling with conditional
> > clauses will introduce points of discontinuity into the mapping function
> > and if this is a part of a servo or some control loop it may react
> > strangely to those points.
>
> I agree.  That kind of fiddling will only lead to trouble.

In the general case, this is true.
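To make the difference between the two constructions concrete, here is a
small C sketch.  This is just my own illustration (the names filter_eq1,
filter_eq2 and div_floor, and the fixed W = 2, are assumptions for the
example, not the PIC assembly from the earlier post).  It shows eq(1) with
floor rounding getting stuck one count below the input, while eq(2) with
the round-to-zero step reaches it:

  /* Sketch of the two filter forms with weight = 1/W, W a power of two. */

  #include <stdio.h>

  #define W 2                       /* weight = 1/2, as in the examples */

  /* Floor division: mimics an arithmetic shift right, i.e. rounds
     toward minus infinity.  C's "/" truncates toward zero, so fix up
     negative operands that have a remainder. */
  static int div_floor(int x)
  {
      int q = x / W;
      if ((x % W) != 0 && x < 0)
          q--;
      return q;
  }

  /* eq(1): filter = filter + (input - filter)*weight, floor rounding */
  static int filter_eq1(int filter, int input)
  {
      return filter + div_floor(input - filter);
  }

  /* eq(2): filter = input - (input - filter)*weight', with the
     difference rounded toward zero as described above. */
  static int filter_eq2(int filter, int input)
  {
      int d = div_floor(input - filter);
      if (d < 0)
          d++;                      /* round toward zero */
      return input - d;
  }

  int main(void)
  {
      int f1 = 8, f2 = 8, input = 10, i;

      for (i = 1; i <= 3; i++) {
          f1 = filter_eq1(f1, input);
          f2 = filter_eq2(f2, input);
          printf("step %d: eq(1) = %d  eq(2) = %d\n", i, f1, f2);
      }
      /* eq(1) settles at 9; eq(2) reaches the input value 10. */
      return 0;
  }

With the filter starting at 8 and the input at 10, eq(1) settles at 9 while
eq(2) reaches 10 on the second step, which is the same case worked through
by hand below.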
However, the "rounding toward zero" algorithm does have a discontinuity at
zero.  Fortunately, this is also the point where the input has no effect on
the filter.  It's kind of like sin(x)/x for x equal to zero.

For a weight factor of 1/2, it's interesting to see how the two roundings
behave:

  (A - B)/2

  A - B   round to zero   floor
  -------------------------------
    0           0            0
    1           0            0
    2           1            1
    3           1            1
   -1           0           -1
   -2           0           -1
   -3          -1           -2
   -4          -1           -2

In other words, the two are the same when the difference is positive.  When
the difference is negative, the "round to zero" result is one greater.  The
discontinuity is probably easier to see this way:

  rtz    -1  -1   0   0   0   0   1   1   2
  floor  -2  -2  -1  -1   0   0   1   1   2
  diff   -4  -3  -2  -1   0   1   2   3   4

The round to zero algorithm results in zero for four values of the
difference, whereas the floor algorithm results in zero for only two.  This
behavior of the rtz algorithm is clearly discontinuous.  However, the
discontinuity works in your favor when applied to eq(2).

Let's take an example:

  weight = 1/2
  filter = 8
  input  = 10

  eq(1):  1) filter = 8  + (10 - 8)/2 = 8  + 1 ===>  9
          2)        = 9  + (10 - 9)/2 = 9  + 0 ===>  9

  eq(2):  1) filter = 10 - (10 - 8)/2 = 10 - 1 ===>  9
          2)        = 10 - (10 - 9)/2 = 10 - 0 ===> 10

Now let's go the other direction:

  weight = 1/2
  filter = 8
  input  = 6

  eq(1):  1) filter = 8 + (6 - 8)/2 = 8 - 1 = 7
          2)        = 7 + (6 - 7)/2 = 7 - 1 = 6

  eq(2):  1) filter = 6 - (6 - 8)/2 = 6 - 0 = 6

Here's another interesting test:

  weight = 1/2
  filter = 8
  input  = 7     (differ by 1)

  eq(1):  1) filter = 8 + (7 - 8)/2 = 8 - 1 = 7

  eq(2):  1) filter = 7 - (7 - 8)/2 = 7 - 0 = 7

Both filters converge, but the round to zero version does so more quickly
when the difference between the input and the filter is even.

So as far as non-linearities and discontinuities go, I think it's clear
that the "frequency response" of the two filters is approximately the same.
In all cases, the number of iterations required for the two filters to
reach their final states differs by at most one.  The round to zero
implementation, however, has the benefit of always converging to the input
value.

Scott

--
http://www.piclist.com hint: To leave the PICList
mailto:piclist-unsubscribe-request@mitvma.mit.edu