Michael Watterson wrote:
> Use fixed point, never floating.

This is bad advice.  Use whatever is most appropriate and don't get caught
up in other people's silly superstitions.

> You can even use Log tables with interpolation if you really need
> some FP

Log tables are rather cumbersome.  That's a long way to go just to avoid
floating point, especially for religious reasons.

> If that isn't sufficient and you really need floating point, then you
> ought to consider a CPU that goes MUCH faster and has C or other
> decent FP library and even a FP co-processor.

Again, use what's appropriate.  If a cheap PIC can do the calculations fast
enough, then spending more on a floating point processor is a waste.

I have done several PIC 16 projects using floating point.  FP is often a
good choice for PID control calculations since the dynamic range of values
can be quite wide or unpredictable.  Fixed point values would have to be
too wide to handle the wide dynamic range with enough meaningful bits left
over.

In one case I did two nested PID control loops using floating point on a
16F877.  Sure, the individual FP operations were much slower than they
would have been with dedicated FP hardware, but they were fast enough.  The
loop time was 8 ms (125 Hz), which worked just fine.  The inner loop
controlled the position of a motor, and the outer loop controlled the speed
of a gasoline engine whose throttle the motor was adjusting.

There are plenty of real world problems where a small PIC's ability to do
floating point computation is fast enough.

> You can of course do real FP on PIC16 or PIC18 but RAM penalty is high
> and speed very slow.

FP usually takes less RAM than the same calculations done in fixed point.
Since the dynamic range is usually poorly known, extra bits have to be left
around to guarantee minimum precision across the range.  Floating point
stores only the significant bits with some bookkeeping overhead.  In the
double nested PID control case I cited above, there wouldn't have been
enough RAM to use 32 bit fixed point instead of 24 bit floating point.
And, the 32 bit fixed point wouldn't have guaranteed the necessary
precision across the possible range of values.  Floating point computation
code is also easier to write since you don't have to keep worrying about
minimum and maximum representable values and keeping track of where the
binary point is in every number.  One way to think of floating point is
fixed point with runtime point tracking.

Floating point has its place.  Avoiding it "just because" is silly
superstition.

> 24 bits fixed point using 8bit whole numbers and 16 bit fraction, with
> 32bit intermediate for multiplcations (16 bit whole number + 16 bit
> fraction).

is ...?  Can I buy you a verb or two?

> correction of dp position only needed after multiply or divide.

Again not true.  Adds (subtracts are really adds with a little extra
bookkeeping) require normalization before the addition, which explicitly
means the point has to be moved around.  Also an add can overflow and
require a one bit shift to re-normalize after the add.  Adds with negative
numbers can result in values down to zero magnitude, so the result can be
shifted a long way with respect to either input value.

********************************************************************
Embed Inc, Littleton Massachusetts, http://www.embedinc.com/products
(978) 742-9014.  Gold level PIC consultants since 2000.
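
PS: For anyone wondering what I mean about float making PID code simpler,
here is a rough sketch of one PID update written in ordinary C.  This is
NOT the code from the engine/motor project above; the names and structure
are made up for illustration.  On a PIC16 the "float" would be the
compiler's (or a hand-written) 24 or 32 bit FP library, but the point is
that the gains, error, and integrator can span a wide and poorly known
range without any manual scaling or tracking of a binary point.

  typedef struct {
      float kp, ki, kd;          /* tuning gains */
      float integ;               /* accumulated integral term */
      float prev_err;            /* error from the previous iteration */
      float out_min, out_max;    /* actuator limits */
  } pid_state_t;

  float pid_update (pid_state_t *p, float setpoint, float measured, float dt)
  {
      float err = setpoint - measured;
      float deriv = (err - p->prev_err) / dt;

      p->integ += err * dt;
      p->prev_err = err;

      float out = p->kp * err + p->ki * p->integ + p->kd * deriv;

      /* Clip to the actuator range. */
      if (out > p->out_max) out = p->out_max;
      if (out < p->out_min) out = p->out_min;
      return out;
  }

To do the same thing in fixed point you would first have to decide where
the binary point goes in every one of those variables, then prove the
intermediate products never overflow, which is exactly the bookkeeping
float does for you at run time.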
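PPS: Here is a rough sketch of why a floating point add moves the point
around, using a toy format: a 16 bit mantissa normalized so the top bit is
set, plus a signed exponent.  Signs and rounding are left out so the three
steps (align, add, re-normalize) stay visible.  This is not any particular
library's code, just the mechanics.

  #include <stdint.h>

  typedef struct {
      uint16_t mant;   /* normalized so bit 15 is set, except for zero */
      int8_t   exp;    /* value = mant * 2^(exp - 16) */
  } tfloat;

  tfloat tfloat_add (tfloat a, tfloat b)
  {
      tfloat r;

      /* Step 1: align the points.  Shift the smaller operand's mantissa
         right until both share the larger exponent. */
      if (a.exp < b.exp) { tfloat t = a; a = b; b = t; }
      int shift = a.exp - b.exp;
      uint32_t am = a.mant;
      uint32_t bm = (shift < 16) ? ((uint32_t)b.mant >> shift) : 0;

      /* Step 2: add the aligned mantissas. */
      uint32_t sum = am + bm;
      r.exp = a.exp;

      /* Step 3: re-normalize.  A carry out of the top bit needs a one bit
         right shift.  With subtraction, cancellation can instead require
         shifting left many bits, possibly all the way to zero. */
      if (sum > 0xFFFF) {
          sum >>= 1;
          r.exp += 1;
      }
      r.mant = (uint16_t)sum;
      return r;
  }

Step 1 and step 3 are both "correction of dp position", and they happen on
every add and subtract, not just after multiply or divide.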