> Need some notion of optimization/resolution/speed criteria.
> Interpolation, series expansion and CORDIC seem to be the obvious
> candidates. Maybe we should 'choose sides' and publish all, targeted
> toward the 8-bit territory. I'll do (at least) one of them. _I guess
> I mean the elementary trig functions to solve just this range of
> problems._

The problem I'm trying to solve is to "de-noise" an input that gives me an 8-bit "bearing" value (0-255) over a full circle of rotation. I may get only a very short burst of samples (16 is my minimum) or a lot more, and I intend to use all that I can get.

Each sample is converted to rectangular notation (by sin/cos table lookup), and I average in four layers of 16 sample pairs, down to a final X,Y pair. In the first layer I use a radius of 1, scaled to 127 to keep the X and Y values within signed 8-bit range. When the first-layer bearings are averaged into a single entry in the second layer (also true for 2>3 and 3>4), the X and Y values then carry radius information that will end up as <=1. If the bearing is very noisy, the radius will be rather a lot less than 1; if it is quiet, the radius will closely approach 1.

The final pair are signed bytes, which represent sin and cos on a circle whose radius is almost certainly not 1. The conventional operation A = acos(X/Radius) performs the scaling of X out to the unit circle and recovers the angle.

Speed, in my application, isn't a huge issue. All the averaging is done between samples at 300-4000 samples/sec, but this final step is done only when either the input signal stops or when I have 65000 samples collected, so I can afford to miss a few while I process.