I did some crude floating point benchmarks a few months ago. Getting a
24-bit floating point result from:

* a 32 bit unsigned integer (ADC accumulator) plus or minus a floating
  point number took 192us
* a 32 bit unsigned integer multiplied by a floating point number took 134us
* a 32 bit unsigned integer divided by a floating point number took 264us

My device needed one addition and one multiplication to convert a 1024x
oversampled ADC reading into a floating point number representing a weight
in kilograms (with 5 significant digits of resolution).

PIC18F25K22, 16 MHz, XC8 1.37 Pro

Application benchmark: 2 divisions, 1 multiplication, 1 subtraction
* 24 bit floats: 820us
* 32 bit floats: 1150us

Note: my application's math benchmark was not "optimized": (A) division
operations can be converted into [faster] multiplication operations, and
(B) multiple consecutive multiplications/divisions can be combined into a
single multiplication (see the sketch at the end of this message). The
benchmark would have taken about half the time given above had those
tricks been employed.

Many things only require a few samples per second, so calculation speed
can often be negligible. IMHO the biggest penalty imposed by floating
point is code size. Using sprintf() to display a floating point number on
an LCD means a big chunk of program memory gets eaten up by library
routines.

-Jason White

On Thu, Aug 25, 2016 at 3:39 PM, Neil wrote:
> Oddly, I still do integer math, scale down, and drop a visual d.p. as
> needed.
> Perhaps I should benchmark 18F floating point to set myself straight!
>
> Cheers,
> -Neil.
>
>
> On 8/25/2016 3:02 PM, Denny Esterline wrote:
> > I think we just touched on the core of the change. 20 years ago, when
> > I was just getting into microcontrollers, I bit-banged lots of
> > different things. I2C, SPI, UART, PWM, RC servo timing, even USB on
> > the AVR parts, I've done them all and more with nothing but GPIO and
> > instruction counts. Today, I can't remember the last time I bit-banged
> > a common protocol. The silicon has become so much more capable while
> > the price has been falling.
> >
> > I recall spending days playing with tricks of integer math to get
> > resolution and scaling without using floating point. "Because
> > floating-point takes too long." A couple weeks ago I was walking a
> > junior (three months out of college, so green I have to water him)
> > through a recent project. I was going down the path of integer math;
> > he declared them as floats... I was about to go into full lecture mode
> > when we actually benchmarked the code - the processor still spends 99%
> > of its time waiting for new data. There's just nothing to be saved in
> > that project to justify the time it takes to "improve" the code.
> >
> > -Denny (who is quickly becoming an old curmudgeon)

--
Jason White
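
P.S. To make tricks (A) and (B) concrete, here is a minimal sketch of the
kind of conversion I'm describing, assuming a generic scale-and-offset
calibration. The constants (OVERSAMPLE, KG_PER_LSB, TARE_KG) and the
function name are invented for illustration; they are not the actual
values from my firmware.

/* Rough sketch only -- calibration numbers are invented, not taken
 * from real hardware. */
#include <stdint.h>

#define OVERSAMPLE  1024.0f    /* accumulator holds 1024 summed samples   */
#define KG_PER_LSB  0.00125f   /* hypothetical slope, kg per averaged LSB */
#define TARE_KG     (-0.250f)  /* hypothetical zero offset, in kg         */

/* Naive form costs a slow divide every reading:
 *     kg = ((float)acc / OVERSAMPLE) * KG_PER_LSB + TARE_KG;
 * Trick (A): replace the divide with a multiply by a reciprocal.
 * Trick (B): fold the two multiplies into one constant, evaluated at
 * compile time, so the run-time work is one multiply and one add.    */
static const float scale = KG_PER_LSB / OVERSAMPLE;

float adc_to_kg(uint32_t acc)   /* acc = sum of 1024 ADC samples */
{
    return (float)acc * scale + TARE_KG;
}

Because "scale" is a compile-time constant, the per-reading cost is one
multiply and one add, and the 264us divide is avoided entirely.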