>Regarding the slow FP sqrt function: If you're doing FP on a pic, speed
>probably isn't a big concern. I'm certainly not defending slow code, but
>sometimes having slow code that works is more important than fast code that
>you don't have!
>
>Seriously, aren't most of the 300-cycle sqrt functions actually fixed
>point? I think all the clever sqrt routines I've seen are fixed point, if
>not integer, algorithms.

Probably fixed point (a lookup table plus interpolation). On the other
hand, improving the efficiency of a floating point calculation based on a
polynomial approximation is quite feasible:

  y = a0 + a1*x + a2*x^2 + a3*x^3 + ...

You keep the partial sum in one memory location and calculate an*x^n in
another. Beyond a certain value n >= N the additional terms are small
enough to be omitted, but this N also depends on x. An easy way to detect
that point is to compare the binary exponents of the partial sum and of the
current term (an*x^n). These exponents are stored in the first byte of the
floating point number, biased by 0x80 (exponent 0x00 belongs to the number
0.0). If you require, say, ten valid digits in the result, you keep adding
terms until the exponent of the term is either eleven or more below the
exponent of the partial sum, or equal to zero (it can't go any smaller).
The test costs some four or five instructions per loop, which you can
probably afford; the only problem is that you have to dig through the
routine to find the right place for it. A rough C sketch of the cutoff is
appended below.

If the original poster is not in a hurry, I would volunteer to make the
improvement and then post the new sqrt function to the web.

Josef
euroclass@pha.pvtnet.cz
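
Here is a rough sketch in C of the exponent-comparison cutoff, just to show
the idea. It uses frexp() from the standard library to read the binary
exponent instead of picking the first byte of the PIC float, and the
coefficient table (a series for exp(x)) and the cutoff value ELIM are
placeholders of mine, not values taken from the actual sqrt routine.

/* Evaluate y = a0 + a1*x + a2*x^2 + ... term by term, stopping early when
 * the current term's binary exponent falls ELIM or more below the partial
 * sum's exponent (or the term underflows to zero).  In the PIC float
 * format the same test is a one-byte compare of the exponent bytes. */
#include <math.h>
#include <stdio.h>

#define NTERMS 16   /* maximum number of polynomial terms            */
#define ELIM   11   /* stop when the term's exponent lags by this    */

static double poly_eval(const double a[], int nterms, double x)
{
    double sum  = a[0];   /* partial sum, kept in one "location"     */
    double powx = 1.0;    /* running power of x, the other location  */
    int    e_sum, e_term;

    for (int n = 1; n < nterms; n++) {
        powx *= x;                      /* x^n                       */
        double term = a[n] * powx;      /* current term an*x^n       */
        sum += term;

        frexp(sum,  &e_sum);            /* binary exponent of sum    */
        frexp(term, &e_term);           /* binary exponent of term   */
        if (term == 0.0 || e_sum - e_term >= ELIM)
            break;                      /* further terms negligible  */
    }
    return sum;
}

int main(void)
{
    /* Example series: exp(x) = sum x^n/n!, truncated by the test.   */
    double a[NTERMS];
    a[0] = 1.0;
    for (int n = 1; n < NTERMS; n++)
        a[n] = a[n - 1] / n;

    printf("truncated series: %.10f\n", poly_eval(a, NTERMS, 0.5));
    printf("libm exp():       %.10f\n", exp(0.5));
    return 0;
}

With ELIM = 11 the loop quits after only a handful of terms for x = 0.5; on
the PIC the same comparison would be a subtraction of the two exponent
bytes and a test against the constant.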