I have been watching the various floating point threads for a while. Having some experience with precision instruments with floating point displays, I'll point out that I have never, in fact, found it necessary to actually do the calculations in floating point. I usually scale until I can do a fixed point operation, or operate with a scale factor.

I did an acoustic bearing indicator, for instance, which needed a four quadrant arctangent. Instead of operating in radians and converting, I did it with an integer Chebyshev polynomial which produced the answer directly in integer tenths of a degree, then displayed the number with a standard binary-integer-to-decimal routine and a hardwired decimal point. This was done with 12 bit arithmetic (3600 < 4096), leaving lots of CYA room if I needed it. For instance, I could improve to hundredths of a degree using 16 bit arithmetic (36000 < 65536).

In contrast, the constants given in Doug Manzer's tangent calculation are 9 decimal digits. If that were used to measure angles in a navigation system, one could measure one's longitude anywhere on the earth to a precision of less than 2 inches. For most problems, that level of precision is overkill. (No criticism of Mr. Manzer here, by the way -- he was accurately describing an algorithm.) Even problems which require taking a small difference between two large numbers can often be restated to eliminate such pathological behavior.

I don't mean to be dogmatic; there _ARE_ valid reasons for FP and high precision operations. However, I would suggest strongly that, if you really need such things, you consider a more powerful processor (or, more accurately, one with more powerful tools). Part of being a good engineer is choosing the right tools for the task. The PIC is a good tool, but not the only tool.

Mike Mullen